00:00:00.000 Started by upstream project "autotest-per-patch" build number 121335
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.067 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.068 The recommended git tool is: git
00:00:00.068 using credential 00000000-0000-0000-0000-000000000002
00:00:00.070 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.125 Fetching changes from the remote Git repository
00:00:00.127 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.176 Using shallow fetch with depth 1
00:00:00.176 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.176 > git --version # timeout=10
00:00:00.218 > git --version # 'git version 2.39.2'
00:00:00.218 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.219 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.219 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.392 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.401 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.411 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD)
00:00:04.411 > git config core.sparsecheckout # timeout=10
00:00:04.422 > git read-tree -mu HEAD # timeout=10
00:00:04.436 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5
00:00:04.461 Commit message: "ansible/roles/custom_facts: Drop nvme features"
00:00:04.461 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10
00:00:04.542 [Pipeline] Start of Pipeline
00:00:04.561 [Pipeline] library
00:00:04.563 Loading library shm_lib@master
00:00:04.564 Library shm_lib@master is cached. Copying from home.
00:00:04.586 [Pipeline] node
00:00:04.598 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.600 [Pipeline] {
00:00:04.614 [Pipeline] catchError
00:00:04.616 [Pipeline] {
00:00:04.630 [Pipeline] wrap
00:00:04.640 [Pipeline] {
00:00:04.649 [Pipeline] stage
00:00:04.650 [Pipeline] { (Prologue)
00:00:04.856 [Pipeline] sh
00:00:05.137 + logger -p user.info -t JENKINS-CI
00:00:05.158 [Pipeline] echo
00:00:05.159 Node: WFP8
00:00:05.164 [Pipeline] sh
00:00:05.458 [Pipeline] setCustomBuildProperty
00:00:05.469 [Pipeline] echo
00:00:05.471 Cleanup processes
00:00:05.475 [Pipeline] sh
00:00:05.757 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.757 1398627 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.774 [Pipeline] sh
00:00:06.062 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.062 ++ grep -v 'sudo pgrep'
00:00:06.062 ++ awk '{print $1}'
00:00:06.062 + sudo kill -9
00:00:06.062 + true
00:00:06.076 [Pipeline] cleanWs
00:00:06.084 [WS-CLEANUP] Deleting project workspace...
00:00:06.084 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.090 [WS-CLEANUP] done
00:00:06.096 [Pipeline] setCustomBuildProperty
00:00:06.113 [Pipeline] sh
00:00:06.390 + sudo git config --global --replace-all safe.directory '*'
00:00:06.452 [Pipeline] nodesByLabel
00:00:06.454 Found a total of 1 nodes with the 'sorcerer' label
00:00:06.463 [Pipeline] httpRequest
00:00:06.468 HttpMethod: GET
00:00:06.469 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:06.472 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:06.474 Response Code: HTTP/1.1 200 OK
00:00:06.474 Success: Status code 200 is in the accepted range: 200,404
00:00:06.475 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:07.371 [Pipeline] sh
00:00:07.652 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:07.674 [Pipeline] httpRequest
00:00:07.679 HttpMethod: GET
00:00:07.680 URL: http://10.211.164.96/packages/spdk_d4fbb5733e2eaefcd7ce9a66f1ea6db59726d6f2.tar.gz
00:00:07.680 Sending request to url: http://10.211.164.96/packages/spdk_d4fbb5733e2eaefcd7ce9a66f1ea6db59726d6f2.tar.gz
00:00:07.695 Response Code: HTTP/1.1 200 OK
00:00:07.696 Success: Status code 200 is in the accepted range: 200,404
00:00:07.696 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d4fbb5733e2eaefcd7ce9a66f1ea6db59726d6f2.tar.gz
00:00:39.087 [Pipeline] sh
00:00:39.369 + tar --no-same-owner -xf spdk_d4fbb5733e2eaefcd7ce9a66f1ea6db59726d6f2.tar.gz
00:00:41.913 [Pipeline] sh
00:00:42.209 + git -C spdk log --oneline -n5
00:00:42.209 d4fbb5733 trace: add trace_flags_fini()
00:00:42.209 8571999d8 test/scheduler: Stop moving all processes between cgroups
00:00:42.209 06472fb6d lib/idxd: fix batch size in kernel IDXD
00:00:42.209 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD
00:00:42.209 3dbaa93c1 nvmf: pass command dword 12 and 13 for write
00:00:42.223 [Pipeline] }
00:00:42.243 [Pipeline] // stage
00:00:42.254 [Pipeline] stage
00:00:42.258 [Pipeline] { (Prepare)
00:00:42.279 [Pipeline] writeFile
00:00:42.297 [Pipeline] sh
00:00:42.603 + logger -p user.info -t JENKINS-CI
00:00:42.633 [Pipeline] sh
00:00:42.952 + logger -p user.info -t JENKINS-CI
00:00:42.965 [Pipeline] sh
00:00:43.245 + cat autorun-spdk.conf
00:00:43.245 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:43.245 SPDK_TEST_NVMF=1
00:00:43.245 SPDK_TEST_NVME_CLI=1
00:00:43.245 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:43.245 SPDK_TEST_NVMF_NICS=e810
00:00:43.245 SPDK_TEST_VFIOUSER=1
00:00:43.245 SPDK_RUN_UBSAN=1
00:00:43.245 NET_TYPE=phy
00:00:43.253 RUN_NIGHTLY=0
00:00:43.257 [Pipeline] readFile
00:00:43.282 [Pipeline] withEnv
00:00:43.284 [Pipeline] {
00:00:43.296 [Pipeline] sh
00:00:43.578 + set -ex
00:00:43.578 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:43.578 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:43.578 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:43.578 ++ SPDK_TEST_NVMF=1
00:00:43.578 ++ SPDK_TEST_NVME_CLI=1
00:00:43.578 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:43.578 ++ SPDK_TEST_NVMF_NICS=e810
00:00:43.578 ++ SPDK_TEST_VFIOUSER=1
00:00:43.578 ++ SPDK_RUN_UBSAN=1
00:00:43.578 ++ NET_TYPE=phy
00:00:43.578 ++ RUN_NIGHTLY=0
00:00:43.578 + case $SPDK_TEST_NVMF_NICS in
00:00:43.578 + DRIVERS=ice
00:00:43.578 + [[ tcp == \r\d\m\a ]]
00:00:43.578 + [[ -n ice ]]
00:00:43.578 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:43.578 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:43.578 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:43.578 rmmod: ERROR: Module irdma is not currently loaded
00:00:43.578 rmmod: ERROR: Module i40iw is not currently loaded
00:00:43.578 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:43.578 + true
00:00:43.578 + for D in $DRIVERS
00:00:43.578 + sudo modprobe ice
00:00:43.578 + exit 0
00:00:43.589 [Pipeline] }
00:00:43.611 [Pipeline] // withEnv
00:00:43.615 [Pipeline] }
00:00:43.628 [Pipeline] // stage
00:00:43.635 [Pipeline] catchError
00:00:43.636 [Pipeline] {
00:00:43.649 [Pipeline] timeout
00:00:43.649 Timeout set to expire in 40 min
00:00:43.651 [Pipeline] {
00:00:43.665 [Pipeline] stage
00:00:43.667 [Pipeline] { (Tests)
00:00:43.682 [Pipeline] sh
00:00:43.962 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:43.962 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:43.962 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:43.962 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:43.962 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:43.962 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:43.962 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:43.962 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:43.962 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:43.962 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:43.962 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:43.962 + source /etc/os-release
00:00:43.962 ++ NAME='Fedora Linux'
00:00:43.962 ++ VERSION='38 (Cloud Edition)'
00:00:43.962 ++ ID=fedora
00:00:43.962 ++ VERSION_ID=38
00:00:43.962 ++ VERSION_CODENAME=
00:00:43.962 ++ PLATFORM_ID=platform:f38
00:00:43.962 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:43.962 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:43.962 ++ LOGO=fedora-logo-icon
00:00:43.962 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:43.962 ++ HOME_URL=https://fedoraproject.org/
00:00:43.962 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:43.962 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:43.962 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:43.962 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:43.962 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:43.962 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:43.962 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:43.962 ++ SUPPORT_END=2024-05-14
00:00:43.962 ++ VARIANT='Cloud Edition'
00:00:43.962 ++ VARIANT_ID=cloud
00:00:43.962 + uname -a
00:00:43.962 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:43.962 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:46.492 Hugepages
00:00:46.492 node hugesize free / total
00:00:46.492 node0 1048576kB 0 / 0
00:00:46.492 node0 2048kB 0 / 0
00:00:46.492 node1 1048576kB 0 / 0
00:00:46.492 node1 2048kB 0 / 0
00:00:46.492
00:00:46.492 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:46.492 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:46.492 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:46.492 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:46.492 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:46.492 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:46.492 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:46.492 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:46.492 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:46.492 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:46.492 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:46.492 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:46.492 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:46.492 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:46.492 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:46.492 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:46.492 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:46.492 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:46.492 + rm -f /tmp/spdk-ld-path
00:00:46.492 + source autorun-spdk.conf
00:00:46.492 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:46.492 ++ SPDK_TEST_NVMF=1
00:00:46.492 ++ SPDK_TEST_NVME_CLI=1
00:00:46.492 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:46.492 ++ SPDK_TEST_NVMF_NICS=e810
00:00:46.492 ++ SPDK_TEST_VFIOUSER=1
00:00:46.492 ++ SPDK_RUN_UBSAN=1
00:00:46.492 ++ NET_TYPE=phy
00:00:46.492 ++ RUN_NIGHTLY=0
00:00:46.492 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:46.492 + [[ -n '' ]]
00:00:46.492 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:46.492 + for M in /var/spdk/build-*-manifest.txt
00:00:46.492 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:46.492 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:46.492 + for M in /var/spdk/build-*-manifest.txt
00:00:46.492 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:46.492 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:46.492 ++ uname
00:00:46.492 + [[ Linux == \L\i\n\u\x ]]
00:00:46.492 + sudo dmesg -T
00:00:46.492 + sudo dmesg --clear
00:00:46.492 + dmesg_pid=1399536
00:00:46.492 + [[ Fedora Linux == FreeBSD ]]
00:00:46.492 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:46.492 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:46.492 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:46.492 + [[ -x /usr/src/fio-static/fio ]]
00:00:46.492 + export FIO_BIN=/usr/src/fio-static/fio
00:00:46.492 + FIO_BIN=/usr/src/fio-static/fio
00:00:46.492 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:46.492 + [[ !
-v VFIO_QEMU_BIN ]] 00:00:46.492 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:46.492 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:46.492 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:46.492 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:46.492 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:46.492 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:46.492 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:46.492 + sudo dmesg -Tw 00:00:46.492 Test configuration: 00:00:46.492 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.492 SPDK_TEST_NVMF=1 00:00:46.492 SPDK_TEST_NVME_CLI=1 00:00:46.492 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.492 SPDK_TEST_NVMF_NICS=e810 00:00:46.492 SPDK_TEST_VFIOUSER=1 00:00:46.492 SPDK_RUN_UBSAN=1 00:00:46.492 NET_TYPE=phy 00:00:46.492 RUN_NIGHTLY=0 00:34:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:46.492 00:34:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:46.492 00:34:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:46.492 00:34:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:46.493 00:34:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:46.493 00:34:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:46.493 00:34:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:46.493 00:34:39 -- paths/export.sh@5 -- $ export PATH 00:00:46.493 00:34:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:46.493 00:34:39 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:46.493 00:34:39 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:46.493 00:34:39 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714170879.XXXXXX 00:00:46.493 00:34:39 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714170879.e5sFLt 00:00:46.493 00:34:39 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:46.493 
00:34:39 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:00:46.493 00:34:39 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:46.493 00:34:39 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:46.493 00:34:39 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:46.493 00:34:39 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:46.493 00:34:39 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:46.493 00:34:39 -- common/autotest_common.sh@10 -- $ set +x 00:00:46.493 00:34:39 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:46.493 00:34:39 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:46.493 00:34:39 -- pm/common@17 -- $ local monitor 00:00:46.493 00:34:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.493 00:34:39 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1399570 00:00:46.493 00:34:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.493 00:34:39 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1399572 00:00:46.493 00:34:39 -- pm/common@21 -- $ date +%s 00:00:46.493 00:34:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.493 00:34:39 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1399575 00:00:46.493 00:34:39 -- pm/common@21 -- $ date +%s 00:00:46.493 00:34:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:46.493 00:34:39 -- pm/common@21 -- $ date +%s 00:00:46.493 00:34:39 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1399578 00:00:46.493 00:34:39 -- pm/common@26 -- $ sleep 1 00:00:46.493 00:34:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714170879 00:00:46.493 00:34:39 -- pm/common@21 -- $ date +%s 00:00:46.493 00:34:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714170879 00:00:46.493 00:34:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714170879 00:00:46.493 00:34:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714170879 00:00:46.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714170879_collect-cpu-load.pm.log 00:00:46.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714170879_collect-vmstat.pm.log 00:00:46.752 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714170879_collect-bmc-pm.bmc.pm.log 00:00:46.752 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714170879_collect-cpu-temp.pm.log 00:00:47.686 00:34:40 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:47.686 00:34:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:47.686 00:34:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:47.686 00:34:40 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:47.686 00:34:40 -- spdk/autobuild.sh@16 -- $ date -u 00:00:47.686 Fri Apr 26 10:34:40 PM UTC 2024 00:00:47.686 00:34:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:47.686 v24.05-pre-450-gd4fbb5733 00:00:47.686 00:34:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:47.686 00:34:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:47.686 00:34:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:47.686 00:34:40 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:47.686 00:34:40 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:47.686 00:34:40 -- common/autotest_common.sh@10 -- $ set +x 00:00:47.686 ************************************ 00:00:47.686 START TEST ubsan 00:00:47.686 ************************************ 00:00:47.686 00:34:40 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:47.686 using ubsan 00:00:47.686 00:00:47.686 real 0m0.000s 00:00:47.686 user 0m0.000s 00:00:47.686 sys 0m0.000s 00:00:47.686 00:34:40 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:47.686 00:34:40 -- common/autotest_common.sh@10 -- $ set +x 00:00:47.686 ************************************ 00:00:47.686 END TEST ubsan 00:00:47.686 ************************************ 00:00:47.944 00:34:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:47.944 00:34:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:47.944 00:34:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:47.944 00:34:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:47.944 00:34:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:47.944 00:34:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:47.944 00:34:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:47.944 00:34:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:47.944 00:34:40 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:47.944 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:47.944 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:48.254 Using 'verbs' RDMA provider 00:01:01.022 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:11.003 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:11.003 Creating mk/config.mk...done. 00:01:11.003 Creating mk/cc.flags.mk...done. 00:01:11.003 Type 'make' to build. 
00:01:11.003 00:35:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:11.003 00:35:03 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:11.003 00:35:03 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:11.003 00:35:03 -- common/autotest_common.sh@10 -- $ set +x 00:01:11.003 ************************************ 00:01:11.003 START TEST make 00:01:11.003 ************************************ 00:01:11.003 00:35:03 -- common/autotest_common.sh@1111 -- $ make -j96 00:01:11.261 make[1]: Nothing to be done for 'all'. 00:01:12.641 The Meson build system 00:01:12.641 Version: 1.3.1 00:01:12.641 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:12.641 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:12.641 Build type: native build 00:01:12.641 Project name: libvfio-user 00:01:12.641 Project version: 0.0.1 00:01:12.641 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:12.641 C linker for the host machine: cc ld.bfd 2.39-16 00:01:12.641 Host machine cpu family: x86_64 00:01:12.641 Host machine cpu: x86_64 00:01:12.641 Run-time dependency threads found: YES 00:01:12.641 Library dl found: YES 00:01:12.641 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:12.641 Run-time dependency json-c found: YES 0.17 00:01:12.641 Run-time dependency cmocka found: YES 1.1.7 00:01:12.641 Program pytest-3 found: NO 00:01:12.641 Program flake8 found: NO 00:01:12.641 Program misspell-fixer found: NO 00:01:12.641 Program restructuredtext-lint found: NO 00:01:12.641 Program valgrind found: YES (/usr/bin/valgrind) 00:01:12.641 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:12.641 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:12.641 Compiler for C supports arguments -Wwrite-strings: YES 00:01:12.641 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:12.641 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:12.641 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:12.641 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:12.641 Build targets in project: 8 00:01:12.641 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:12.641 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:12.641 00:01:12.641 libvfio-user 0.0.1 00:01:12.641 00:01:12.641 User defined options 00:01:12.641 buildtype : debug 00:01:12.641 default_library: shared 00:01:12.641 libdir : /usr/local/lib 00:01:12.641 00:01:12.641 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:12.899 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:13.157 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:13.157 [2/37] Compiling C object samples/null.p/null.c.o 00:01:13.157 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:13.157 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:13.157 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:13.157 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:13.157 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:13.157 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:13.157 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:13.157 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:13.157 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:13.157 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:13.157 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:13.157 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:13.157 [15/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:13.158 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:13.158 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:13.158 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:13.158 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:13.158 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:13.158 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:13.158 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:13.158 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:13.158 [24/37] Compiling C object samples/server.p/server.c.o 00:01:13.158 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:13.158 [26/37] Compiling C object samples/client.p/client.c.o 00:01:13.158 [27/37] Linking target samples/client 00:01:13.415 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:13.415 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:13.415 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:13.415 [31/37] Linking target test/unit_tests 00:01:13.415 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:13.415 [33/37] Linking target samples/gpio-pci-idio-16 00:01:13.415 [34/37] Linking target samples/null 00:01:13.415 [35/37] Linking target samples/server 00:01:13.415 [36/37] Linking target samples/lspci 00:01:13.415 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:13.415 INFO: autodetecting backend as ninja 00:01:13.415 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:13.673 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:13.933 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:13.933 ninja: no work to do. 00:01:19.223 The Meson build system 00:01:19.223 Version: 1.3.1 00:01:19.223 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:19.223 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:19.223 Build type: native build 00:01:19.223 Program cat found: YES (/usr/bin/cat) 00:01:19.223 Project name: DPDK 00:01:19.223 Project version: 23.11.0 00:01:19.223 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:19.224 C linker for the host machine: cc ld.bfd 2.39-16 00:01:19.224 Host machine cpu family: x86_64 00:01:19.224 Host machine cpu: x86_64 00:01:19.224 Message: ## Building in Developer Mode ## 00:01:19.224 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:19.224 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:19.224 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:19.224 Program python3 found: YES (/usr/bin/python3) 00:01:19.224 Program cat found: YES (/usr/bin/cat) 00:01:19.224 Compiler for C supports arguments -march=native: YES 00:01:19.224 Checking for size of "void *" : 8 00:01:19.224 Checking for size of "void *" : 8 (cached) 00:01:19.224 Library m found: YES 00:01:19.224 Library numa found: YES 00:01:19.224 Has header "numaif.h" : YES 00:01:19.224 Library fdt found: NO 00:01:19.224 Library execinfo found: NO 00:01:19.224 Has header "execinfo.h" : YES 00:01:19.224 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:19.224 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:19.224 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:19.224 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:19.224 Run-time dependency openssl found: YES 3.0.9 00:01:19.224 Run-time dependency libpcap found: YES 1.10.4 00:01:19.224 Has header "pcap.h" with dependency libpcap: YES 00:01:19.224 Compiler for C supports arguments -Wcast-qual: YES 00:01:19.224 Compiler for C supports arguments -Wdeprecated: YES 00:01:19.224 Compiler for C supports arguments -Wformat: YES 00:01:19.224 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:19.224 Compiler for C supports arguments -Wformat-security: NO 00:01:19.224 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:19.224 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:19.224 Compiler for C supports arguments -Wnested-externs: YES 00:01:19.224 Compiler for C supports arguments -Wold-style-definition: YES 00:01:19.224 Compiler for C supports arguments -Wpointer-arith: YES 00:01:19.224 Compiler for C supports arguments -Wsign-compare: YES 00:01:19.224 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:19.224 Compiler for C supports arguments -Wundef: YES 00:01:19.224 Compiler for C supports arguments -Wwrite-strings: YES 00:01:19.224 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:19.224 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:19.224 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:19.224 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:19.224 Program objdump found: YES (/usr/bin/objdump) 00:01:19.224 Compiler for C supports arguments -mavx512f: YES 00:01:19.224 Checking if "AVX512 checking" compiles: YES 00:01:19.224 Fetching value of define "__SSE4_2__" : 1 00:01:19.224 Fetching value of define "__AES__" : 1 00:01:19.224 Fetching value of define "__AVX__" : 1 00:01:19.224 Fetching value of define "__AVX2__" : 1 00:01:19.224 Fetching value of define "__AVX512BW__" : 1 00:01:19.224 Fetching value of define "__AVX512CD__" : 1 00:01:19.224 Fetching value of define "__AVX512DQ__" : 1 00:01:19.224 Fetching value of define "__AVX512F__" : 1 00:01:19.224 Fetching value of define "__AVX512VL__" : 1 00:01:19.224 Fetching value of define "__PCLMUL__" : 1 00:01:19.224 Fetching value of define "__RDRND__" : 1 00:01:19.224 Fetching value of define "__RDSEED__" : 1 00:01:19.224 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:19.224 Fetching value of define "__znver1__" : (undefined) 00:01:19.224 Fetching value of define "__znver2__" : (undefined) 00:01:19.224 Fetching value of define "__znver3__" : (undefined) 00:01:19.224 Fetching value of define "__znver4__" : (undefined) 00:01:19.224 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:19.224 Message: lib/log: Defining dependency "log" 00:01:19.224 Message: lib/kvargs: Defining dependency "kvargs" 00:01:19.224 Message: lib/telemetry: Defining dependency "telemetry" 00:01:19.224 Checking for function "getentropy" : NO 00:01:19.224 Message: lib/eal: Defining dependency "eal" 00:01:19.224 Message: lib/ring: Defining dependency "ring" 00:01:19.224 Message: lib/rcu: Defining dependency "rcu" 00:01:19.224 Message: lib/mempool: Defining dependency "mempool" 00:01:19.224 Message: lib/mbuf: Defining dependency "mbuf" 00:01:19.224 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:19.224 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:19.224 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:19.224 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:19.224 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:19.224 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:19.224 Compiler for C supports arguments -mpclmul: YES 00:01:19.224 Compiler for C supports arguments -maes: YES 00:01:19.224 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:19.224 Compiler for C supports arguments -mavx512bw: YES 00:01:19.224 Compiler for C supports arguments -mavx512dq: YES 00:01:19.224 Compiler for C supports arguments -mavx512vl: YES 00:01:19.224 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:19.224 Compiler for C supports arguments -mavx2: YES 00:01:19.224 Compiler for C supports arguments -mavx: YES 00:01:19.224 Message: lib/net: Defining dependency "net" 00:01:19.224 Message: lib/meter: Defining dependency "meter" 00:01:19.224 Message: lib/ethdev: Defining dependency "ethdev" 00:01:19.224 Message: lib/pci: Defining dependency "pci" 00:01:19.224 Message: lib/cmdline: Defining dependency "cmdline" 00:01:19.224 Message: lib/hash: Defining dependency "hash" 00:01:19.224 Message: lib/timer: Defining dependency "timer" 00:01:19.224 Message: lib/compressdev: Defining dependency "compressdev" 00:01:19.224 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:19.224 Message: lib/dmadev: Defining dependency "dmadev" 00:01:19.224 Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:19.224 Message: lib/power: Defining dependency "power" 00:01:19.224 Message: lib/reorder: Defining dependency "reorder" 00:01:19.224 Message: lib/security: Defining dependency "security" 00:01:19.224 Has header "linux/userfaultfd.h" : YES 00:01:19.224 Has header "linux/vduse.h" : YES 00:01:19.224 Message: lib/vhost: Defining dependency "vhost" 00:01:19.224 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:19.224 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:19.224 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:19.224 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:19.224 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:19.224 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:19.224 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:19.224 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:19.224 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:19.224 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:19.224 Program doxygen found: YES (/usr/bin/doxygen) 00:01:19.224 Configuring doxy-api-html.conf using configuration 00:01:19.224 Configuring doxy-api-man.conf using configuration 00:01:19.224 Program mandb found: YES (/usr/bin/mandb) 00:01:19.224 Program sphinx-build found: NO 00:01:19.224 Configuring rte_build_config.h using configuration 00:01:19.224 Message: 00:01:19.224 ================= 00:01:19.224 Applications Enabled 00:01:19.224 ================= 00:01:19.224 00:01:19.224 apps: 00:01:19.224 00:01:19.224 00:01:19.224 Message: 00:01:19.224 ================= 00:01:19.224 Libraries Enabled 00:01:19.224 ================= 00:01:19.224 00:01:19.224 libs: 00:01:19.224 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:19.224 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:19.224 cryptodev, dmadev, power, reorder, security, vhost, 00:01:19.224 00:01:19.224 Message: 00:01:19.224 =============== 00:01:19.224 Drivers Enabled 00:01:19.224 =============== 00:01:19.224 00:01:19.224 common: 00:01:19.224 00:01:19.224 bus: 00:01:19.224 pci, vdev, 00:01:19.224 mempool: 00:01:19.224 ring, 00:01:19.224 dma: 00:01:19.224 00:01:19.224 net: 00:01:19.224 00:01:19.224 crypto: 00:01:19.224 00:01:19.224 compress: 00:01:19.224 00:01:19.224 vdpa: 00:01:19.224 00:01:19.224 00:01:19.224 Message: 00:01:19.224 ================= 00:01:19.224 Content Skipped 00:01:19.224 ================= 00:01:19.224 00:01:19.224 apps: 00:01:19.224 dumpcap: explicitly disabled via build config 00:01:19.224 graph: explicitly disabled via build config 00:01:19.224 pdump: explicitly disabled via build config 00:01:19.224 proc-info: explicitly disabled via build config 00:01:19.224 test-acl: explicitly disabled via build config 00:01:19.224 test-bbdev: explicitly disabled via build config 00:01:19.224 test-cmdline: explicitly disabled via build config 00:01:19.224 test-compress-perf: explicitly disabled via build config 00:01:19.224 test-crypto-perf: explicitly disabled via build config 00:01:19.224 test-dma-perf: explicitly disabled via build config 00:01:19.224 test-eventdev: explicitly disabled via build config 00:01:19.224 test-fib: explicitly disabled via build config 00:01:19.224 test-flow-perf: explicitly disabled via build config 00:01:19.224 test-gpudev: explicitly disabled via build config 00:01:19.224 test-mldev: explicitly disabled via build 
config 00:01:19.224 test-pipeline: explicitly disabled via build config 00:01:19.224 test-pmd: explicitly disabled via build config 00:01:19.224 test-regex: explicitly disabled via build config 00:01:19.224 test-sad: explicitly disabled via build config 00:01:19.224 test-security-perf: explicitly disabled via build config 00:01:19.224 00:01:19.224 libs: 00:01:19.224 metrics: explicitly disabled via build config 00:01:19.224 acl: explicitly disabled via build config 00:01:19.224 bbdev: explicitly disabled via build config 00:01:19.224 bitratestats: explicitly disabled via build config 00:01:19.224 bpf: explicitly disabled via build config 00:01:19.224 cfgfile: explicitly disabled via build config 00:01:19.224 distributor: explicitly disabled via build config 00:01:19.224 efd: explicitly disabled via build config 00:01:19.224 eventdev: explicitly disabled via build config 00:01:19.224 dispatcher: explicitly disabled via build config 00:01:19.224 gpudev: explicitly disabled via build config 00:01:19.224 gro: explicitly disabled via build config 00:01:19.224 gso: explicitly disabled via build config 00:01:19.225 ip_frag: explicitly disabled via build config 00:01:19.225 jobstats: explicitly disabled via build config 00:01:19.225 latencystats: explicitly disabled via build config 00:01:19.225 lpm: explicitly disabled via build config 00:01:19.225 member: explicitly disabled via build config 00:01:19.225 pcapng: explicitly disabled via build config 00:01:19.225 rawdev: explicitly disabled via build config 00:01:19.225 regexdev: explicitly disabled via build config 00:01:19.225 mldev: explicitly disabled via build config 00:01:19.225 rib: explicitly disabled via build config 00:01:19.225 sched: explicitly disabled via build config 00:01:19.225 stack: explicitly disabled via build config 00:01:19.225 ipsec: explicitly disabled via build config 00:01:19.225 pdcp: explicitly disabled via build config 00:01:19.225 fib: explicitly disabled via build config 00:01:19.225 port: explicitly disabled via build config 00:01:19.225 pdump: explicitly disabled via build config 00:01:19.225 table: explicitly disabled via build config 00:01:19.225 pipeline: explicitly disabled via build config 00:01:19.225 graph: explicitly disabled via build config 00:01:19.225 node: explicitly disabled via build config 00:01:19.225 00:01:19.225 drivers: 00:01:19.225 common/cpt: not in enabled drivers build config 00:01:19.225 common/dpaax: not in enabled drivers build config 00:01:19.225 common/iavf: not in enabled drivers build config 00:01:19.225 common/idpf: not in enabled drivers build config 00:01:19.225 common/mvep: not in enabled drivers build config 00:01:19.225 common/octeontx: not in enabled drivers build config 00:01:19.225 bus/auxiliary: not in enabled drivers build config 00:01:19.225 bus/cdx: not in enabled drivers build config 00:01:19.225 bus/dpaa: not in enabled drivers build config 00:01:19.225 bus/fslmc: not in enabled drivers build config 00:01:19.225 bus/ifpga: not in enabled drivers build config 00:01:19.225 bus/platform: not in enabled drivers build config 00:01:19.225 bus/vmbus: not in enabled drivers build config 00:01:19.225 common/cnxk: not in enabled drivers build config 00:01:19.225 common/mlx5: not in enabled drivers build config 00:01:19.225 common/nfp: not in enabled drivers build config 00:01:19.225 common/qat: not in enabled drivers build config 00:01:19.225 common/sfc_efx: not in enabled drivers build config 00:01:19.225 mempool/bucket: not in enabled drivers build config 00:01:19.225 
mempool/cnxk: not in enabled drivers build config 00:01:19.225 mempool/dpaa: not in enabled drivers build config 00:01:19.225 mempool/dpaa2: not in enabled drivers build config 00:01:19.225 mempool/octeontx: not in enabled drivers build config 00:01:19.225 mempool/stack: not in enabled drivers build config 00:01:19.225 dma/cnxk: not in enabled drivers build config 00:01:19.225 dma/dpaa: not in enabled drivers build config 00:01:19.225 dma/dpaa2: not in enabled drivers build config 00:01:19.225 dma/hisilicon: not in enabled drivers build config 00:01:19.225 dma/idxd: not in enabled drivers build config 00:01:19.225 dma/ioat: not in enabled drivers build config 00:01:19.225 dma/skeleton: not in enabled drivers build config 00:01:19.225 net/af_packet: not in enabled drivers build config 00:01:19.225 net/af_xdp: not in enabled drivers build config 00:01:19.225 net/ark: not in enabled drivers build config 00:01:19.225 net/atlantic: not in enabled drivers build config 00:01:19.225 net/avp: not in enabled drivers build config 00:01:19.225 net/axgbe: not in enabled drivers build config 00:01:19.225 net/bnx2x: not in enabled drivers build config 00:01:19.225 net/bnxt: not in enabled drivers build config 00:01:19.225 net/bonding: not in enabled drivers build config 00:01:19.225 net/cnxk: not in enabled drivers build config 00:01:19.225 net/cpfl: not in enabled drivers build config 00:01:19.225 net/cxgbe: not in enabled drivers build config 00:01:19.225 net/dpaa: not in enabled drivers build config 00:01:19.225 net/dpaa2: not in enabled drivers build config 00:01:19.225 net/e1000: not in enabled drivers build config 00:01:19.225 net/ena: not in enabled drivers build config 00:01:19.225 net/enetc: not in enabled drivers build config 00:01:19.225 net/enetfec: not in enabled drivers build config 00:01:19.225 net/enic: not in enabled drivers build config 00:01:19.225 net/failsafe: not in enabled drivers build config 00:01:19.225 net/fm10k: not in enabled drivers build config 00:01:19.225 net/gve: not in enabled drivers build config 00:01:19.225 net/hinic: not in enabled drivers build config 00:01:19.225 net/hns3: not in enabled drivers build config 00:01:19.225 net/i40e: not in enabled drivers build config 00:01:19.225 net/iavf: not in enabled drivers build config 00:01:19.225 net/ice: not in enabled drivers build config 00:01:19.225 net/idpf: not in enabled drivers build config 00:01:19.225 net/igc: not in enabled drivers build config 00:01:19.225 net/ionic: not in enabled drivers build config 00:01:19.225 net/ipn3ke: not in enabled drivers build config 00:01:19.225 net/ixgbe: not in enabled drivers build config 00:01:19.225 net/mana: not in enabled drivers build config 00:01:19.225 net/memif: not in enabled drivers build config 00:01:19.225 net/mlx4: not in enabled drivers build config 00:01:19.225 net/mlx5: not in enabled drivers build config 00:01:19.225 net/mvneta: not in enabled drivers build config 00:01:19.225 net/mvpp2: not in enabled drivers build config 00:01:19.225 net/netvsc: not in enabled drivers build config 00:01:19.225 net/nfb: not in enabled drivers build config 00:01:19.225 net/nfp: not in enabled drivers build config 00:01:19.225 net/ngbe: not in enabled drivers build config 00:01:19.225 net/null: not in enabled drivers build config 00:01:19.225 net/octeontx: not in enabled drivers build config 00:01:19.225 net/octeon_ep: not in enabled drivers build config 00:01:19.225 net/pcap: not in enabled drivers build config 00:01:19.225 net/pfe: not in enabled drivers build config 
00:01:19.225 net/qede: not in enabled drivers build config 00:01:19.225 net/ring: not in enabled drivers build config 00:01:19.225 net/sfc: not in enabled drivers build config 00:01:19.225 net/softnic: not in enabled drivers build config 00:01:19.225 net/tap: not in enabled drivers build config 00:01:19.225 net/thunderx: not in enabled drivers build config 00:01:19.225 net/txgbe: not in enabled drivers build config 00:01:19.225 net/vdev_netvsc: not in enabled drivers build config 00:01:19.225 net/vhost: not in enabled drivers build config 00:01:19.225 net/virtio: not in enabled drivers build config 00:01:19.225 net/vmxnet3: not in enabled drivers build config 00:01:19.225 raw/*: missing internal dependency, "rawdev" 00:01:19.225 crypto/armv8: not in enabled drivers build config 00:01:19.225 crypto/bcmfs: not in enabled drivers build config 00:01:19.225 crypto/caam_jr: not in enabled drivers build config 00:01:19.225 crypto/ccp: not in enabled drivers build config 00:01:19.225 crypto/cnxk: not in enabled drivers build config 00:01:19.225 crypto/dpaa_sec: not in enabled drivers build config 00:01:19.225 crypto/dpaa2_sec: not in enabled drivers build config 00:01:19.225 crypto/ipsec_mb: not in enabled drivers build config 00:01:19.225 crypto/mlx5: not in enabled drivers build config 00:01:19.225 crypto/mvsam: not in enabled drivers build config 00:01:19.225 crypto/nitrox: not in enabled drivers build config 00:01:19.225 crypto/null: not in enabled drivers build config 00:01:19.225 crypto/octeontx: not in enabled drivers build config 00:01:19.225 crypto/openssl: not in enabled drivers build config 00:01:19.225 crypto/scheduler: not in enabled drivers build config 00:01:19.225 crypto/uadk: not in enabled drivers build config 00:01:19.225 crypto/virtio: not in enabled drivers build config 00:01:19.225 compress/isal: not in enabled drivers build config 00:01:19.225 compress/mlx5: not in enabled drivers build config 00:01:19.225 compress/octeontx: not in enabled drivers build config 00:01:19.225 compress/zlib: not in enabled drivers build config 00:01:19.225 regex/*: missing internal dependency, "regexdev" 00:01:19.225 ml/*: missing internal dependency, "mldev" 00:01:19.225 vdpa/ifc: not in enabled drivers build config 00:01:19.225 vdpa/mlx5: not in enabled drivers build config 00:01:19.225 vdpa/nfp: not in enabled drivers build config 00:01:19.225 vdpa/sfc: not in enabled drivers build config 00:01:19.225 event/*: missing internal dependency, "eventdev" 00:01:19.225 baseband/*: missing internal dependency, "bbdev" 00:01:19.225 gpu/*: missing internal dependency, "gpudev" 00:01:19.225 00:01:19.225 00:01:19.225 Build targets in project: 85 00:01:19.225 00:01:19.225 DPDK 23.11.0 00:01:19.225 00:01:19.225 User defined options 00:01:19.225 buildtype : debug 00:01:19.225 default_library : shared 00:01:19.225 libdir : lib 00:01:19.225 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:19.225 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:19.225 c_link_args : 00:01:19.225 cpu_instruction_set: native 00:01:19.225 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:19.225 disable_libs : 
pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:19.225 enable_docs : false 00:01:19.225 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:19.225 enable_kmods : false 00:01:19.225 tests : false 00:01:19.225 00:01:19.225 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:19.225 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:19.225 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:19.225 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:19.225 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:19.225 [4/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:19.225 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:19.225 [6/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:19.225 [7/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:19.225 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:19.225 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:19.225 [10/265] Linking static target lib/librte_kvargs.a 00:01:19.225 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:19.225 [12/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:19.225 [13/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:19.225 [14/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:19.225 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:19.484 [16/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:19.484 [17/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:19.484 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:19.484 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:19.484 [20/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:19.484 [21/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:19.484 [22/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:19.484 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:19.484 [24/265] Linking static target lib/librte_log.a 00:01:19.484 [25/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:19.484 [26/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:19.484 [27/265] Linking static target lib/librte_pci.a 00:01:19.484 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:19.484 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:19.484 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:19.484 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:19.484 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:19.484 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:19.484 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:19.484 [35/265] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:19.484 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:19.484 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:19.747 [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:19.747 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:19.747 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:19.747 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:19.747 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:19.747 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:19.747 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:19.747 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:19.747 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:19.747 [47/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:19.747 [48/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:19.747 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:19.747 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:19.747 [51/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:19.747 [52/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:19.747 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:19.747 [54/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.747 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:19.747 [56/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:19.747 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:19.747 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:19.747 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:19.747 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:19.747 [61/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:19.747 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:19.747 [63/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:19.747 [64/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:19.747 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:19.747 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:19.747 [67/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:19.747 [68/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:19.747 [69/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:19.747 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:19.747 [71/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:19.747 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:19.747 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:19.747 [74/265] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:19.747 [75/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:19.747 [76/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.747 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:19.747 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:19.747 [79/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:19.747 [80/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:19.747 [81/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:19.747 [82/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:19.747 [83/265] Linking static target lib/librte_telemetry.a 00:01:19.747 [84/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:19.747 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:19.747 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:19.747 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:19.747 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:19.747 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:20.006 [90/265] Linking static target lib/librte_meter.a 00:01:20.006 [91/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:20.006 [92/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:20.006 [93/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:20.006 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:20.006 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:20.006 [96/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:20.006 [97/265] Linking static target lib/librte_ring.a 00:01:20.006 [98/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:20.006 [99/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:20.006 [100/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:20.006 [101/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:20.006 [102/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:20.006 [103/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:20.006 [104/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:20.006 [105/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:20.006 [106/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:20.006 [107/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:20.006 [108/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:20.006 [109/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:20.006 [110/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:20.006 [111/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:20.006 [112/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:20.006 [113/265] Linking static target lib/librte_cmdline.a 00:01:20.006 [114/265] Linking static target lib/librte_mempool.a 00:01:20.006 [115/265] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:20.006 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:20.006 [117/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:20.006 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:20.006 [119/265] Linking static target lib/librte_net.a 00:01:20.006 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:20.006 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:20.006 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:20.006 [123/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:20.006 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:20.006 [125/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:20.006 [126/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:20.006 [127/265] Linking static target lib/librte_eal.a 00:01:20.006 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:20.006 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:20.006 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:20.006 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:20.006 [132/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:20.006 [133/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:20.006 [134/265] Linking static target lib/librte_timer.a 00:01:20.006 [135/265] Linking static target lib/librte_rcu.a 00:01:20.006 [136/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:20.006 [137/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:20.006 [138/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.006 [139/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.006 [140/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.006 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:20.006 [142/265] Linking target lib/librte_log.so.24.0 00:01:20.265 [143/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:20.265 [144/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:20.265 [145/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:20.265 [146/265] Linking static target lib/librte_compressdev.a 00:01:20.265 [147/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:20.265 [148/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:20.265 [149/265] Linking static target lib/librte_mbuf.a 00:01:20.265 [150/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:20.265 [151/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:20.265 [152/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:20.265 [153/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:20.265 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:20.265 [155/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:20.265 [156/265] Generating lib/net.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:20.265 [157/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:20.265 [158/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:20.265 [159/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:20.265 [160/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.265 [161/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:20.265 [162/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:20.265 [163/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:20.265 [164/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:20.265 [165/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:20.265 [166/265] Linking static target lib/librte_dmadev.a 00:01:20.265 [167/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:20.265 [168/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:20.265 [169/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:20.265 [170/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:20.265 [171/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:20.265 [172/265] Linking static target lib/librte_reorder.a 00:01:20.265 [173/265] Linking target lib/librte_kvargs.so.24.0 00:01:20.265 [174/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:20.265 [175/265] Linking target lib/librte_telemetry.so.24.0 00:01:20.265 [176/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:20.265 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:20.265 [178/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.265 [179/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:20.265 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:20.265 [181/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:20.265 [182/265] Linking static target lib/librte_security.a 00:01:20.265 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:20.265 [184/265] Linking static target lib/librte_power.a 00:01:20.265 [185/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:20.265 [186/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:20.265 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:20.265 [188/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:20.265 [189/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:20.524 [190/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.524 [191/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:20.524 [192/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:20.524 [193/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:20.524 [194/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:20.524 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:20.524 
[196/265] Linking static target drivers/librte_bus_vdev.a 00:01:20.524 [197/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:20.524 [198/265] Linking static target lib/librte_hash.a 00:01:20.524 [199/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:20.524 [200/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:20.524 [201/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:20.524 [202/265] Linking static target drivers/librte_bus_pci.a 00:01:20.524 [203/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:20.524 [204/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:20.524 [205/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.524 [206/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:20.524 [207/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:20.524 [208/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:20.524 [209/265] Linking static target drivers/librte_mempool_ring.a 00:01:20.782 [210/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:20.782 [211/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.782 [212/265] Linking static target lib/librte_cryptodev.a 00:01:20.782 [213/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.782 [214/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.782 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.782 [216/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.782 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.782 [218/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.040 [219/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:21.040 [220/265] Linking static target lib/librte_ethdev.a 00:01:21.040 [221/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:21.040 [222/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.040 [223/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.298 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.231 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:22.232 [226/265] Linking static target lib/librte_vhost.a 00:01:22.490 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.869 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.066 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.448 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.448 [231/265] Linking target lib/librte_eal.so.24.0 00:01:29.707 [232/265] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:29.707 [233/265] Linking target lib/librte_ring.so.24.0 00:01:29.707 [234/265] Linking target lib/librte_meter.so.24.0 00:01:29.707 [235/265] Linking target lib/librte_pci.so.24.0 00:01:29.707 [236/265] Linking target lib/librte_dmadev.so.24.0 00:01:29.707 [237/265] Linking target lib/librte_timer.so.24.0 00:01:29.707 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:29.967 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:29.967 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:29.967 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:29.967 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:29.967 [243/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:29.967 [244/265] Linking target lib/librte_mempool.so.24.0 00:01:29.967 [245/265] Linking target lib/librte_rcu.so.24.0 00:01:29.967 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:29.967 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:29.967 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:29.967 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:29.967 [250/265] Linking target lib/librte_mbuf.so.24.0 00:01:30.227 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:30.227 [252/265] Linking target lib/librte_reorder.so.24.0 00:01:30.227 [253/265] Linking target lib/librte_net.so.24.0 00:01:30.227 [254/265] Linking target lib/librte_compressdev.so.24.0 00:01:30.227 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:30.486 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:30.486 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:30.486 [258/265] Linking target lib/librte_cmdline.so.24.0 00:01:30.486 [259/265] Linking target lib/librte_security.so.24.0 00:01:30.486 [260/265] Linking target lib/librte_hash.so.24.0 00:01:30.486 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:30.486 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:30.486 [263/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:30.486 [264/265] Linking target lib/librte_power.so.24.0 00:01:30.745 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:30.745 INFO: autodetecting backend as ninja 00:01:30.745 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:31.684 CC lib/log/log.o 00:01:31.684 CC lib/log/log_flags.o 00:01:31.684 CC lib/log/log_deprecated.o 00:01:31.684 CC lib/ut_mock/mock.o 00:01:31.684 CC lib/ut/ut.o 00:01:31.684 LIB libspdk_ut_mock.a 00:01:31.684 LIB libspdk_log.a 00:01:31.684 SO libspdk_ut_mock.so.6.0 00:01:31.684 LIB libspdk_ut.a 00:01:31.684 SO libspdk_log.so.7.0 00:01:31.684 SO libspdk_ut.so.2.0 00:01:31.684 SYMLINK libspdk_ut_mock.so 00:01:31.684 SYMLINK libspdk_log.so 00:01:31.684 SYMLINK libspdk_ut.so 00:01:31.944 CC lib/dma/dma.o 00:01:32.203 CC lib/util/base64.o 00:01:32.203 CC lib/util/bit_array.o 00:01:32.203 CC lib/util/cpuset.o 00:01:32.203 CC lib/util/crc32c.o 00:01:32.203 CC lib/util/crc16.o 00:01:32.203 CC lib/util/crc32.o 
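The meson summary near the top of this block (enable_docs : false, enable_kmods : false, tests : false, enable_drivers : bus,bus/pci,bus/vdev,mempool/ring) together with the reported backend command ("ninja -C .../spdk/dpdk/build-tmp -j 96") describes how the bundled DPDK is configured and built in this run. Below is a minimal standalone sketch of that step; the -D option spellings are assumptions inferred from the summary fields, since the actual flags are passed by SPDK's configure wrapper and are not printed in the log.
    # sketch only -- reconstructs the configure/build step implied by the summary above;
    # option names are assumed to correspond to the summary fields, not copied from the real invocation
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
    meson setup build-tmp \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
    ninja -C build-tmp -j 96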
00:01:32.203 CC lib/util/crc64.o 00:01:32.203 CC lib/util/crc32_ieee.o 00:01:32.203 CC lib/util/dif.o 00:01:32.203 CC lib/util/fd.o 00:01:32.203 CC lib/ioat/ioat.o 00:01:32.203 CC lib/util/file.o 00:01:32.203 CC lib/util/hexlify.o 00:01:32.203 CC lib/util/iov.o 00:01:32.203 CC lib/util/math.o 00:01:32.203 CC lib/util/strerror_tls.o 00:01:32.203 CC lib/util/pipe.o 00:01:32.203 CC lib/util/string.o 00:01:32.203 CC lib/util/uuid.o 00:01:32.203 CC lib/util/xor.o 00:01:32.203 CC lib/util/fd_group.o 00:01:32.203 CC lib/util/zipf.o 00:01:32.203 CXX lib/trace_parser/trace.o 00:01:32.203 CC lib/vfio_user/host/vfio_user_pci.o 00:01:32.203 CC lib/vfio_user/host/vfio_user.o 00:01:32.203 LIB libspdk_dma.a 00:01:32.203 SO libspdk_dma.so.4.0 00:01:32.462 LIB libspdk_ioat.a 00:01:32.462 SYMLINK libspdk_dma.so 00:01:32.462 SO libspdk_ioat.so.7.0 00:01:32.462 LIB libspdk_vfio_user.a 00:01:32.462 SYMLINK libspdk_ioat.so 00:01:32.462 SO libspdk_vfio_user.so.5.0 00:01:32.463 LIB libspdk_util.a 00:01:32.463 SYMLINK libspdk_vfio_user.so 00:01:32.463 SO libspdk_util.so.9.0 00:01:32.722 SYMLINK libspdk_util.so 00:01:32.722 LIB libspdk_trace_parser.a 00:01:32.722 SO libspdk_trace_parser.so.5.0 00:01:32.982 SYMLINK libspdk_trace_parser.so 00:01:32.982 CC lib/conf/conf.o 00:01:32.982 CC lib/vmd/vmd.o 00:01:32.982 CC lib/vmd/led.o 00:01:32.982 CC lib/rdma/common.o 00:01:32.982 CC lib/rdma/rdma_verbs.o 00:01:32.982 CC lib/json/json_write.o 00:01:32.982 CC lib/json/json_parse.o 00:01:32.982 CC lib/json/json_util.o 00:01:32.982 CC lib/idxd/idxd.o 00:01:32.982 CC lib/env_dpdk/env.o 00:01:32.982 CC lib/idxd/idxd_user.o 00:01:32.982 CC lib/env_dpdk/memory.o 00:01:32.982 CC lib/env_dpdk/pci.o 00:01:32.982 CC lib/env_dpdk/init.o 00:01:32.982 CC lib/env_dpdk/threads.o 00:01:32.982 CC lib/env_dpdk/pci_ioat.o 00:01:32.982 CC lib/env_dpdk/pci_virtio.o 00:01:32.982 CC lib/env_dpdk/pci_vmd.o 00:01:32.982 CC lib/env_dpdk/pci_idxd.o 00:01:32.982 CC lib/env_dpdk/pci_dpdk.o 00:01:32.982 CC lib/env_dpdk/pci_event.o 00:01:32.982 CC lib/env_dpdk/sigbus_handler.o 00:01:32.982 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:32.982 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:33.242 LIB libspdk_conf.a 00:01:33.242 SO libspdk_conf.so.6.0 00:01:33.242 LIB libspdk_json.a 00:01:33.242 LIB libspdk_rdma.a 00:01:33.242 SYMLINK libspdk_conf.so 00:01:33.242 SO libspdk_json.so.6.0 00:01:33.242 SO libspdk_rdma.so.6.0 00:01:33.242 SYMLINK libspdk_json.so 00:01:33.242 SYMLINK libspdk_rdma.so 00:01:33.501 LIB libspdk_idxd.a 00:01:33.501 SO libspdk_idxd.so.12.0 00:01:33.501 LIB libspdk_vmd.a 00:01:33.501 SO libspdk_vmd.so.6.0 00:01:33.501 SYMLINK libspdk_idxd.so 00:01:33.502 SYMLINK libspdk_vmd.so 00:01:33.502 CC lib/jsonrpc/jsonrpc_server.o 00:01:33.502 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:33.502 CC lib/jsonrpc/jsonrpc_client.o 00:01:33.502 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:33.761 LIB libspdk_jsonrpc.a 00:01:33.761 SO libspdk_jsonrpc.so.6.0 00:01:33.761 SYMLINK libspdk_jsonrpc.so 00:01:34.020 LIB libspdk_env_dpdk.a 00:01:34.020 SO libspdk_env_dpdk.so.14.0 00:01:34.020 SYMLINK libspdk_env_dpdk.so 00:01:34.280 CC lib/rpc/rpc.o 00:01:34.280 LIB libspdk_rpc.a 00:01:34.280 SO libspdk_rpc.so.6.0 00:01:34.540 SYMLINK libspdk_rpc.so 00:01:34.799 CC lib/notify/notify.o 00:01:34.799 CC lib/notify/notify_rpc.o 00:01:34.799 CC lib/trace/trace.o 00:01:34.799 CC lib/trace/trace_flags.o 00:01:34.799 CC lib/trace/trace_rpc.o 00:01:34.799 CC lib/keyring/keyring_rpc.o 00:01:34.799 CC lib/keyring/keyring.o 00:01:34.799 LIB libspdk_notify.a 00:01:34.799 SO libspdk_notify.so.6.0 
00:01:35.059 LIB libspdk_trace.a 00:01:35.059 LIB libspdk_keyring.a 00:01:35.059 SYMLINK libspdk_notify.so 00:01:35.059 SO libspdk_keyring.so.1.0 00:01:35.059 SO libspdk_trace.so.10.0 00:01:35.059 SYMLINK libspdk_keyring.so 00:01:35.059 SYMLINK libspdk_trace.so 00:01:35.319 CC lib/thread/thread.o 00:01:35.319 CC lib/thread/iobuf.o 00:01:35.319 CC lib/sock/sock.o 00:01:35.319 CC lib/sock/sock_rpc.o 00:01:35.579 LIB libspdk_sock.a 00:01:35.579 SO libspdk_sock.so.9.0 00:01:35.839 SYMLINK libspdk_sock.so 00:01:36.098 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:36.098 CC lib/nvme/nvme_ctrlr.o 00:01:36.098 CC lib/nvme/nvme_ns_cmd.o 00:01:36.098 CC lib/nvme/nvme_fabric.o 00:01:36.098 CC lib/nvme/nvme_pcie_common.o 00:01:36.098 CC lib/nvme/nvme_ns.o 00:01:36.098 CC lib/nvme/nvme_pcie.o 00:01:36.098 CC lib/nvme/nvme_quirks.o 00:01:36.098 CC lib/nvme/nvme_qpair.o 00:01:36.098 CC lib/nvme/nvme.o 00:01:36.098 CC lib/nvme/nvme_transport.o 00:01:36.098 CC lib/nvme/nvme_discovery.o 00:01:36.098 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:36.098 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:36.098 CC lib/nvme/nvme_tcp.o 00:01:36.098 CC lib/nvme/nvme_opal.o 00:01:36.098 CC lib/nvme/nvme_io_msg.o 00:01:36.098 CC lib/nvme/nvme_poll_group.o 00:01:36.098 CC lib/nvme/nvme_zns.o 00:01:36.098 CC lib/nvme/nvme_stubs.o 00:01:36.098 CC lib/nvme/nvme_auth.o 00:01:36.098 CC lib/nvme/nvme_cuse.o 00:01:36.098 CC lib/nvme/nvme_vfio_user.o 00:01:36.098 CC lib/nvme/nvme_rdma.o 00:01:36.357 LIB libspdk_thread.a 00:01:36.357 SO libspdk_thread.so.10.0 00:01:36.357 SYMLINK libspdk_thread.so 00:01:36.653 CC lib/blob/blobstore.o 00:01:36.653 CC lib/blob/zeroes.o 00:01:36.654 CC lib/blob/blob_bs_dev.o 00:01:36.654 CC lib/blob/request.o 00:01:36.654 CC lib/accel/accel.o 00:01:36.654 CC lib/accel/accel_sw.o 00:01:36.654 CC lib/accel/accel_rpc.o 00:01:36.654 CC lib/init/subsystem_rpc.o 00:01:36.654 CC lib/init/json_config.o 00:01:36.654 CC lib/init/subsystem.o 00:01:36.654 CC lib/init/rpc.o 00:01:36.654 CC lib/virtio/virtio.o 00:01:36.654 CC lib/virtio/virtio_vhost_user.o 00:01:36.654 CC lib/virtio/virtio_pci.o 00:01:36.654 CC lib/virtio/virtio_vfio_user.o 00:01:36.654 CC lib/vfu_tgt/tgt_endpoint.o 00:01:36.654 CC lib/vfu_tgt/tgt_rpc.o 00:01:36.980 LIB libspdk_init.a 00:01:36.980 SO libspdk_init.so.5.0 00:01:36.980 LIB libspdk_vfu_tgt.a 00:01:36.980 LIB libspdk_virtio.a 00:01:36.980 SO libspdk_vfu_tgt.so.3.0 00:01:36.980 SYMLINK libspdk_init.so 00:01:36.980 SO libspdk_virtio.so.7.0 00:01:36.980 SYMLINK libspdk_vfu_tgt.so 00:01:37.238 SYMLINK libspdk_virtio.so 00:01:37.238 CC lib/event/app.o 00:01:37.238 CC lib/event/reactor.o 00:01:37.238 CC lib/event/log_rpc.o 00:01:37.238 CC lib/event/app_rpc.o 00:01:37.238 CC lib/event/scheduler_static.o 00:01:37.496 LIB libspdk_accel.a 00:01:37.496 SO libspdk_accel.so.15.0 00:01:37.496 SYMLINK libspdk_accel.so 00:01:37.496 LIB libspdk_nvme.a 00:01:37.754 LIB libspdk_event.a 00:01:37.754 SO libspdk_nvme.so.13.0 00:01:37.754 SO libspdk_event.so.13.0 00:01:37.754 SYMLINK libspdk_event.so 00:01:37.754 CC lib/bdev/bdev.o 00:01:37.754 CC lib/bdev/bdev_rpc.o 00:01:37.754 CC lib/bdev/bdev_zone.o 00:01:37.754 CC lib/bdev/part.o 00:01:37.754 CC lib/bdev/scsi_nvme.o 00:01:38.013 SYMLINK libspdk_nvme.so 00:01:38.947 LIB libspdk_blob.a 00:01:38.947 SO libspdk_blob.so.11.0 00:01:38.947 SYMLINK libspdk_blob.so 00:01:39.205 CC lib/lvol/lvol.o 00:01:39.205 CC lib/blobfs/blobfs.o 00:01:39.205 CC lib/blobfs/tree.o 00:01:39.771 LIB libspdk_bdev.a 00:01:39.771 SO libspdk_bdev.so.15.0 00:01:39.771 LIB libspdk_blobfs.a 00:01:39.771 SO 
libspdk_blobfs.so.10.0 00:01:39.771 LIB libspdk_lvol.a 00:01:39.771 SYMLINK libspdk_bdev.so 00:01:39.771 SO libspdk_lvol.so.10.0 00:01:39.771 SYMLINK libspdk_blobfs.so 00:01:39.771 SYMLINK libspdk_lvol.so 00:01:40.030 CC lib/ftl/ftl_init.o 00:01:40.030 CC lib/ftl/ftl_core.o 00:01:40.030 CC lib/ftl/ftl_debug.o 00:01:40.030 CC lib/ftl/ftl_layout.o 00:01:40.030 CC lib/ftl/ftl_io.o 00:01:40.030 CC lib/ftl/ftl_sb.o 00:01:40.030 CC lib/ftl/ftl_l2p_flat.o 00:01:40.030 CC lib/ftl/ftl_l2p.o 00:01:40.030 CC lib/ftl/ftl_nv_cache.o 00:01:40.030 CC lib/ftl/ftl_band.o 00:01:40.030 CC lib/ftl/ftl_band_ops.o 00:01:40.030 CC lib/ftl/ftl_writer.o 00:01:40.030 CC lib/ftl/ftl_rq.o 00:01:40.030 CC lib/ftl/ftl_reloc.o 00:01:40.030 CC lib/ftl/ftl_l2p_cache.o 00:01:40.030 CC lib/ftl/ftl_p2l.o 00:01:40.030 CC lib/ftl/mngt/ftl_mngt.o 00:01:40.030 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:40.030 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:40.030 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:40.030 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:40.030 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:40.030 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:40.030 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:40.030 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:40.030 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:40.030 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:40.030 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:40.030 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:40.030 CC lib/ftl/utils/ftl_conf.o 00:01:40.030 CC lib/ftl/utils/ftl_md.o 00:01:40.030 CC lib/ftl/utils/ftl_mempool.o 00:01:40.030 CC lib/ftl/utils/ftl_bitmap.o 00:01:40.030 CC lib/ftl/utils/ftl_property.o 00:01:40.030 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:40.030 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:40.030 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:40.030 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:40.030 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:40.030 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:40.030 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:40.030 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:40.030 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:40.030 CC lib/ftl/base/ftl_base_dev.o 00:01:40.030 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:40.030 CC lib/ftl/base/ftl_base_bdev.o 00:01:40.030 CC lib/ftl/ftl_trace.o 00:01:40.030 CC lib/nbd/nbd.o 00:01:40.030 CC lib/nbd/nbd_rpc.o 00:01:40.030 CC lib/scsi/lun.o 00:01:40.030 CC lib/scsi/dev.o 00:01:40.030 CC lib/scsi/port.o 00:01:40.030 CC lib/scsi/scsi_pr.o 00:01:40.030 CC lib/scsi/scsi.o 00:01:40.030 CC lib/scsi/scsi_bdev.o 00:01:40.030 CC lib/nvmf/ctrlr.o 00:01:40.030 CC lib/scsi/task.o 00:01:40.030 CC lib/nvmf/ctrlr_discovery.o 00:01:40.030 CC lib/scsi/scsi_rpc.o 00:01:40.030 CC lib/nvmf/ctrlr_bdev.o 00:01:40.030 CC lib/nvmf/nvmf_rpc.o 00:01:40.030 CC lib/nvmf/subsystem.o 00:01:40.030 CC lib/nvmf/nvmf.o 00:01:40.030 CC lib/nvmf/vfio_user.o 00:01:40.030 CC lib/nvmf/transport.o 00:01:40.030 CC lib/nvmf/tcp.o 00:01:40.030 CC lib/nvmf/rdma.o 00:01:40.030 CC lib/ublk/ublk_rpc.o 00:01:40.030 CC lib/ublk/ublk.o 00:01:40.596 LIB libspdk_nbd.a 00:01:40.596 SO libspdk_nbd.so.7.0 00:01:40.596 SYMLINK libspdk_nbd.so 00:01:40.596 LIB libspdk_scsi.a 00:01:40.854 SO libspdk_scsi.so.9.0 00:01:40.854 LIB libspdk_ublk.a 00:01:40.854 SO libspdk_ublk.so.3.0 00:01:40.854 SYMLINK libspdk_scsi.so 00:01:40.854 SYMLINK libspdk_ublk.so 00:01:40.854 LIB libspdk_ftl.a 00:01:41.112 SO libspdk_ftl.so.9.0 00:01:41.113 CC lib/iscsi/conn.o 00:01:41.113 CC lib/vhost/vhost.o 00:01:41.113 CC lib/vhost/vhost_rpc.o 00:01:41.113 CC lib/iscsi/init_grp.o 00:01:41.113 CC lib/vhost/vhost_scsi.o 00:01:41.113 CC lib/iscsi/iscsi.o 
00:01:41.113 CC lib/vhost/rte_vhost_user.o 00:01:41.113 CC lib/vhost/vhost_blk.o 00:01:41.113 CC lib/iscsi/md5.o 00:01:41.113 CC lib/iscsi/param.o 00:01:41.113 CC lib/iscsi/portal_grp.o 00:01:41.113 CC lib/iscsi/tgt_node.o 00:01:41.113 CC lib/iscsi/iscsi_subsystem.o 00:01:41.113 CC lib/iscsi/iscsi_rpc.o 00:01:41.113 CC lib/iscsi/task.o 00:01:41.372 SYMLINK libspdk_ftl.so 00:01:41.940 LIB libspdk_nvmf.a 00:01:41.940 SO libspdk_nvmf.so.18.0 00:01:41.940 LIB libspdk_vhost.a 00:01:41.940 SO libspdk_vhost.so.8.0 00:01:41.940 SYMLINK libspdk_nvmf.so 00:01:41.940 SYMLINK libspdk_vhost.so 00:01:42.199 LIB libspdk_iscsi.a 00:01:42.199 SO libspdk_iscsi.so.8.0 00:01:42.199 SYMLINK libspdk_iscsi.so 00:01:42.767 CC module/env_dpdk/env_dpdk_rpc.o 00:01:42.767 CC module/vfu_device/vfu_virtio.o 00:01:42.767 CC module/vfu_device/vfu_virtio_blk.o 00:01:42.767 CC module/vfu_device/vfu_virtio_rpc.o 00:01:42.767 CC module/vfu_device/vfu_virtio_scsi.o 00:01:42.767 CC module/sock/posix/posix.o 00:01:42.767 CC module/accel/error/accel_error_rpc.o 00:01:42.767 CC module/accel/error/accel_error.o 00:01:42.767 CC module/keyring/file/keyring.o 00:01:42.767 CC module/keyring/file/keyring_rpc.o 00:01:42.767 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:42.767 CC module/accel/ioat/accel_ioat_rpc.o 00:01:42.767 CC module/accel/ioat/accel_ioat.o 00:01:42.767 LIB libspdk_env_dpdk_rpc.a 00:01:42.767 CC module/accel/dsa/accel_dsa.o 00:01:42.767 CC module/accel/dsa/accel_dsa_rpc.o 00:01:42.767 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:42.767 CC module/accel/iaa/accel_iaa_rpc.o 00:01:42.767 CC module/accel/iaa/accel_iaa.o 00:01:42.767 CC module/blob/bdev/blob_bdev.o 00:01:42.767 CC module/scheduler/gscheduler/gscheduler.o 00:01:43.026 SO libspdk_env_dpdk_rpc.so.6.0 00:01:43.026 SYMLINK libspdk_env_dpdk_rpc.so 00:01:43.026 LIB libspdk_keyring_file.a 00:01:43.026 LIB libspdk_accel_error.a 00:01:43.026 SO libspdk_keyring_file.so.1.0 00:01:43.026 LIB libspdk_scheduler_dpdk_governor.a 00:01:43.026 LIB libspdk_scheduler_gscheduler.a 00:01:43.026 LIB libspdk_accel_ioat.a 00:01:43.026 LIB libspdk_scheduler_dynamic.a 00:01:43.026 SO libspdk_accel_error.so.2.0 00:01:43.026 SO libspdk_scheduler_gscheduler.so.4.0 00:01:43.026 LIB libspdk_accel_iaa.a 00:01:43.026 SO libspdk_scheduler_dynamic.so.4.0 00:01:43.026 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:43.026 LIB libspdk_accel_dsa.a 00:01:43.026 SO libspdk_accel_ioat.so.6.0 00:01:43.026 SYMLINK libspdk_keyring_file.so 00:01:43.026 SO libspdk_accel_iaa.so.3.0 00:01:43.026 LIB libspdk_blob_bdev.a 00:01:43.026 SO libspdk_accel_dsa.so.5.0 00:01:43.026 SYMLINK libspdk_accel_error.so 00:01:43.026 SYMLINK libspdk_scheduler_gscheduler.so 00:01:43.026 SYMLINK libspdk_scheduler_dynamic.so 00:01:43.026 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:43.026 SYMLINK libspdk_accel_ioat.so 00:01:43.026 SO libspdk_blob_bdev.so.11.0 00:01:43.285 SYMLINK libspdk_accel_iaa.so 00:01:43.285 SYMLINK libspdk_accel_dsa.so 00:01:43.285 LIB libspdk_vfu_device.a 00:01:43.285 SYMLINK libspdk_blob_bdev.so 00:01:43.285 SO libspdk_vfu_device.so.3.0 00:01:43.285 SYMLINK libspdk_vfu_device.so 00:01:43.285 LIB libspdk_sock_posix.a 00:01:43.543 SO libspdk_sock_posix.so.6.0 00:01:43.543 SYMLINK libspdk_sock_posix.so 00:01:43.543 CC module/blobfs/bdev/blobfs_bdev.o 00:01:43.543 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:43.543 CC module/bdev/gpt/gpt.o 00:01:43.543 CC module/bdev/gpt/vbdev_gpt.o 00:01:43.543 CC module/bdev/error/vbdev_error_rpc.o 00:01:43.543 CC module/bdev/error/vbdev_error.o 
00:01:43.543 CC module/bdev/malloc/bdev_malloc.o 00:01:43.543 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:43.543 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:43.543 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:43.543 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:43.543 CC module/bdev/passthru/vbdev_passthru.o 00:01:43.543 CC module/bdev/delay/vbdev_delay.o 00:01:43.543 CC module/bdev/raid/bdev_raid.o 00:01:43.543 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:43.543 CC module/bdev/raid/bdev_raid_rpc.o 00:01:43.544 CC module/bdev/split/vbdev_split.o 00:01:43.544 CC module/bdev/split/vbdev_split_rpc.o 00:01:43.544 CC module/bdev/raid/bdev_raid_sb.o 00:01:43.544 CC module/bdev/raid/raid0.o 00:01:43.544 CC module/bdev/raid/concat.o 00:01:43.544 CC module/bdev/raid/raid1.o 00:01:43.544 CC module/bdev/lvol/vbdev_lvol.o 00:01:43.544 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:43.544 CC module/bdev/iscsi/bdev_iscsi.o 00:01:43.544 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:43.802 CC module/bdev/nvme/bdev_nvme.o 00:01:43.802 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:43.802 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:43.802 CC module/bdev/ftl/bdev_ftl.o 00:01:43.802 CC module/bdev/nvme/nvme_rpc.o 00:01:43.802 CC module/bdev/nvme/vbdev_opal.o 00:01:43.802 CC module/bdev/nvme/bdev_mdns_client.o 00:01:43.802 CC module/bdev/aio/bdev_aio.o 00:01:43.802 CC module/bdev/aio/bdev_aio_rpc.o 00:01:43.802 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:43.802 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:43.802 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:43.802 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:43.802 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:43.802 CC module/bdev/null/bdev_null.o 00:01:43.802 CC module/bdev/null/bdev_null_rpc.o 00:01:43.802 LIB libspdk_blobfs_bdev.a 00:01:43.802 SO libspdk_blobfs_bdev.so.6.0 00:01:44.061 LIB libspdk_bdev_error.a 00:01:44.061 LIB libspdk_bdev_split.a 00:01:44.061 SYMLINK libspdk_blobfs_bdev.so 00:01:44.061 LIB libspdk_bdev_null.a 00:01:44.061 LIB libspdk_bdev_gpt.a 00:01:44.061 SO libspdk_bdev_error.so.6.0 00:01:44.061 LIB libspdk_bdev_ftl.a 00:01:44.061 SO libspdk_bdev_split.so.6.0 00:01:44.061 SO libspdk_bdev_null.so.6.0 00:01:44.061 LIB libspdk_bdev_zone_block.a 00:01:44.061 SO libspdk_bdev_ftl.so.6.0 00:01:44.061 LIB libspdk_bdev_passthru.a 00:01:44.061 SO libspdk_bdev_gpt.so.6.0 00:01:44.061 LIB libspdk_bdev_malloc.a 00:01:44.061 SYMLINK libspdk_bdev_error.so 00:01:44.061 LIB libspdk_bdev_iscsi.a 00:01:44.061 SO libspdk_bdev_zone_block.so.6.0 00:01:44.061 SO libspdk_bdev_passthru.so.6.0 00:01:44.061 LIB libspdk_bdev_aio.a 00:01:44.061 LIB libspdk_bdev_delay.a 00:01:44.061 SYMLINK libspdk_bdev_null.so 00:01:44.061 SYMLINK libspdk_bdev_split.so 00:01:44.061 SO libspdk_bdev_iscsi.so.6.0 00:01:44.061 SO libspdk_bdev_malloc.so.6.0 00:01:44.061 SO libspdk_bdev_delay.so.6.0 00:01:44.061 SYMLINK libspdk_bdev_ftl.so 00:01:44.061 SYMLINK libspdk_bdev_gpt.so 00:01:44.061 SO libspdk_bdev_aio.so.6.0 00:01:44.061 SYMLINK libspdk_bdev_passthru.so 00:01:44.061 SYMLINK libspdk_bdev_zone_block.so 00:01:44.061 SYMLINK libspdk_bdev_malloc.so 00:01:44.061 SYMLINK libspdk_bdev_iscsi.so 00:01:44.061 SYMLINK libspdk_bdev_delay.so 00:01:44.061 LIB libspdk_bdev_lvol.a 00:01:44.061 SYMLINK libspdk_bdev_aio.so 00:01:44.061 SO libspdk_bdev_lvol.so.6.0 00:01:44.061 LIB libspdk_bdev_virtio.a 00:01:44.320 SO libspdk_bdev_virtio.so.6.0 00:01:44.320 SYMLINK libspdk_bdev_lvol.so 00:01:44.320 SYMLINK libspdk_bdev_virtio.so 00:01:44.320 LIB libspdk_bdev_raid.a 00:01:44.579 SO 
libspdk_bdev_raid.so.6.0 00:01:44.579 SYMLINK libspdk_bdev_raid.so 00:01:45.147 LIB libspdk_bdev_nvme.a 00:01:45.405 SO libspdk_bdev_nvme.so.7.0 00:01:45.405 SYMLINK libspdk_bdev_nvme.so 00:01:45.971 CC module/event/subsystems/scheduler/scheduler.o 00:01:45.971 CC module/event/subsystems/vmd/vmd.o 00:01:45.971 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:45.971 CC module/event/subsystems/sock/sock.o 00:01:45.971 CC module/event/subsystems/iobuf/iobuf.o 00:01:45.971 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:45.971 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:45.971 CC module/event/subsystems/keyring/keyring.o 00:01:45.971 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:46.230 LIB libspdk_event_scheduler.a 00:01:46.230 LIB libspdk_event_keyring.a 00:01:46.230 LIB libspdk_event_vmd.a 00:01:46.230 SO libspdk_event_scheduler.so.4.0 00:01:46.230 LIB libspdk_event_vfu_tgt.a 00:01:46.230 LIB libspdk_event_sock.a 00:01:46.230 LIB libspdk_event_vhost_blk.a 00:01:46.230 SO libspdk_event_keyring.so.1.0 00:01:46.230 SO libspdk_event_vmd.so.6.0 00:01:46.230 LIB libspdk_event_iobuf.a 00:01:46.230 SO libspdk_event_vhost_blk.so.3.0 00:01:46.230 SO libspdk_event_vfu_tgt.so.3.0 00:01:46.230 SO libspdk_event_sock.so.5.0 00:01:46.230 SO libspdk_event_iobuf.so.3.0 00:01:46.230 SYMLINK libspdk_event_scheduler.so 00:01:46.230 SYMLINK libspdk_event_keyring.so 00:01:46.230 SYMLINK libspdk_event_vmd.so 00:01:46.230 SYMLINK libspdk_event_vfu_tgt.so 00:01:46.230 SYMLINK libspdk_event_sock.so 00:01:46.230 SYMLINK libspdk_event_vhost_blk.so 00:01:46.230 SYMLINK libspdk_event_iobuf.so 00:01:46.490 CC module/event/subsystems/accel/accel.o 00:01:46.749 LIB libspdk_event_accel.a 00:01:46.749 SO libspdk_event_accel.so.6.0 00:01:46.749 SYMLINK libspdk_event_accel.so 00:01:47.009 CC module/event/subsystems/bdev/bdev.o 00:01:47.268 LIB libspdk_event_bdev.a 00:01:47.268 SO libspdk_event_bdev.so.6.0 00:01:47.268 SYMLINK libspdk_event_bdev.so 00:01:47.540 CC module/event/subsystems/nbd/nbd.o 00:01:47.541 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:47.541 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:47.541 CC module/event/subsystems/ublk/ublk.o 00:01:47.541 CC module/event/subsystems/scsi/scsi.o 00:01:47.804 LIB libspdk_event_nbd.a 00:01:47.804 SO libspdk_event_nbd.so.6.0 00:01:47.804 LIB libspdk_event_ublk.a 00:01:47.804 LIB libspdk_event_scsi.a 00:01:47.804 SO libspdk_event_ublk.so.3.0 00:01:47.804 LIB libspdk_event_nvmf.a 00:01:47.804 SYMLINK libspdk_event_nbd.so 00:01:47.804 SO libspdk_event_scsi.so.6.0 00:01:47.804 SYMLINK libspdk_event_ublk.so 00:01:47.804 SO libspdk_event_nvmf.so.6.0 00:01:47.804 SYMLINK libspdk_event_scsi.so 00:01:47.804 SYMLINK libspdk_event_nvmf.so 00:01:48.062 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:48.062 CC module/event/subsystems/iscsi/iscsi.o 00:01:48.321 LIB libspdk_event_vhost_scsi.a 00:01:48.321 LIB libspdk_event_iscsi.a 00:01:48.321 SO libspdk_event_vhost_scsi.so.3.0 00:01:48.321 SO libspdk_event_iscsi.so.6.0 00:01:48.321 SYMLINK libspdk_event_vhost_scsi.so 00:01:48.321 SYMLINK libspdk_event_iscsi.so 00:01:48.580 SO libspdk.so.6.0 00:01:48.580 SYMLINK libspdk.so 00:01:48.845 TEST_HEADER include/spdk/accel.h 00:01:48.845 TEST_HEADER include/spdk/accel_module.h 00:01:48.845 TEST_HEADER include/spdk/assert.h 00:01:48.845 TEST_HEADER include/spdk/bdev.h 00:01:48.845 TEST_HEADER include/spdk/barrier.h 00:01:48.845 CC test/rpc_client/rpc_client_test.o 00:01:48.845 TEST_HEADER include/spdk/base64.h 00:01:48.845 TEST_HEADER include/spdk/bdev_module.h 
00:01:48.845 TEST_HEADER include/spdk/bdev_zone.h 00:01:48.845 TEST_HEADER include/spdk/bit_array.h 00:01:48.845 TEST_HEADER include/spdk/blob_bdev.h 00:01:48.845 TEST_HEADER include/spdk/bit_pool.h 00:01:48.845 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:48.845 TEST_HEADER include/spdk/blobfs.h 00:01:48.845 CC app/spdk_lspci/spdk_lspci.o 00:01:48.845 TEST_HEADER include/spdk/blob.h 00:01:48.845 TEST_HEADER include/spdk/conf.h 00:01:48.845 CC app/trace_record/trace_record.o 00:01:48.845 TEST_HEADER include/spdk/crc16.h 00:01:48.845 TEST_HEADER include/spdk/config.h 00:01:48.845 TEST_HEADER include/spdk/crc32.h 00:01:48.846 TEST_HEADER include/spdk/crc64.h 00:01:48.846 TEST_HEADER include/spdk/cpuset.h 00:01:48.846 CC app/spdk_top/spdk_top.o 00:01:48.846 TEST_HEADER include/spdk/dif.h 00:01:48.846 TEST_HEADER include/spdk/endian.h 00:01:48.846 TEST_HEADER include/spdk/dma.h 00:01:48.846 TEST_HEADER include/spdk/env_dpdk.h 00:01:48.846 TEST_HEADER include/spdk/env.h 00:01:48.846 TEST_HEADER include/spdk/fd_group.h 00:01:48.846 TEST_HEADER include/spdk/event.h 00:01:48.846 TEST_HEADER include/spdk/fd.h 00:01:48.846 TEST_HEADER include/spdk/file.h 00:01:48.846 CC app/spdk_nvme_perf/perf.o 00:01:48.846 TEST_HEADER include/spdk/gpt_spec.h 00:01:48.846 CXX app/trace/trace.o 00:01:48.846 TEST_HEADER include/spdk/hexlify.h 00:01:48.846 TEST_HEADER include/spdk/ftl.h 00:01:48.846 TEST_HEADER include/spdk/histogram_data.h 00:01:48.846 TEST_HEADER include/spdk/idxd.h 00:01:48.846 TEST_HEADER include/spdk/idxd_spec.h 00:01:48.846 TEST_HEADER include/spdk/init.h 00:01:48.846 TEST_HEADER include/spdk/ioat.h 00:01:48.846 TEST_HEADER include/spdk/ioat_spec.h 00:01:48.846 TEST_HEADER include/spdk/json.h 00:01:48.846 TEST_HEADER include/spdk/iscsi_spec.h 00:01:48.846 TEST_HEADER include/spdk/jsonrpc.h 00:01:48.846 TEST_HEADER include/spdk/likely.h 00:01:48.846 CC app/spdk_nvme_identify/identify.o 00:01:48.846 TEST_HEADER include/spdk/keyring_module.h 00:01:48.846 TEST_HEADER include/spdk/keyring.h 00:01:48.846 TEST_HEADER include/spdk/log.h 00:01:48.846 CC app/spdk_nvme_discover/discovery_aer.o 00:01:48.846 TEST_HEADER include/spdk/memory.h 00:01:48.846 TEST_HEADER include/spdk/mmio.h 00:01:48.846 TEST_HEADER include/spdk/nbd.h 00:01:48.846 TEST_HEADER include/spdk/notify.h 00:01:48.846 TEST_HEADER include/spdk/lvol.h 00:01:48.846 TEST_HEADER include/spdk/nvme.h 00:01:48.846 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:48.846 TEST_HEADER include/spdk/nvme_intel.h 00:01:48.846 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:48.846 TEST_HEADER include/spdk/nvme_spec.h 00:01:48.846 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:48.846 TEST_HEADER include/spdk/nvme_zns.h 00:01:48.846 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:48.846 TEST_HEADER include/spdk/nvmf_spec.h 00:01:48.846 TEST_HEADER include/spdk/nvmf.h 00:01:48.846 TEST_HEADER include/spdk/opal.h 00:01:48.846 TEST_HEADER include/spdk/nvmf_transport.h 00:01:48.846 TEST_HEADER include/spdk/pci_ids.h 00:01:48.846 TEST_HEADER include/spdk/opal_spec.h 00:01:48.846 TEST_HEADER include/spdk/queue.h 00:01:48.846 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:48.846 TEST_HEADER include/spdk/pipe.h 00:01:48.846 TEST_HEADER include/spdk/rpc.h 00:01:48.846 TEST_HEADER include/spdk/reduce.h 00:01:48.846 TEST_HEADER include/spdk/scsi_spec.h 00:01:48.846 TEST_HEADER include/spdk/scheduler.h 00:01:48.846 TEST_HEADER include/spdk/scsi.h 00:01:48.846 TEST_HEADER include/spdk/sock.h 00:01:48.846 TEST_HEADER include/spdk/string.h 00:01:48.846 TEST_HEADER 
include/spdk/thread.h 00:01:48.846 TEST_HEADER include/spdk/stdinc.h 00:01:48.846 TEST_HEADER include/spdk/tree.h 00:01:48.846 CC app/nvmf_tgt/nvmf_main.o 00:01:48.846 TEST_HEADER include/spdk/trace.h 00:01:48.846 TEST_HEADER include/spdk/trace_parser.h 00:01:48.846 TEST_HEADER include/spdk/util.h 00:01:48.846 TEST_HEADER include/spdk/ublk.h 00:01:48.846 TEST_HEADER include/spdk/uuid.h 00:01:48.846 CC app/spdk_dd/spdk_dd.o 00:01:48.846 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:48.846 TEST_HEADER include/spdk/version.h 00:01:48.846 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:48.846 TEST_HEADER include/spdk/vhost.h 00:01:48.846 TEST_HEADER include/spdk/xor.h 00:01:48.846 TEST_HEADER include/spdk/vmd.h 00:01:48.846 CC app/iscsi_tgt/iscsi_tgt.o 00:01:48.846 TEST_HEADER include/spdk/zipf.h 00:01:48.846 CXX test/cpp_headers/accel_module.o 00:01:48.846 CXX test/cpp_headers/accel.o 00:01:48.846 CXX test/cpp_headers/assert.o 00:01:48.846 CXX test/cpp_headers/base64.o 00:01:48.846 CXX test/cpp_headers/barrier.o 00:01:48.846 CXX test/cpp_headers/bdev.o 00:01:48.846 CXX test/cpp_headers/bdev_module.o 00:01:48.846 CXX test/cpp_headers/bit_array.o 00:01:48.846 CXX test/cpp_headers/bdev_zone.o 00:01:48.846 CXX test/cpp_headers/bit_pool.o 00:01:48.846 CXX test/cpp_headers/blob.o 00:01:48.846 CXX test/cpp_headers/blob_bdev.o 00:01:48.846 CXX test/cpp_headers/blobfs_bdev.o 00:01:48.846 CXX test/cpp_headers/blobfs.o 00:01:48.846 CC app/vhost/vhost.o 00:01:48.846 CXX test/cpp_headers/conf.o 00:01:48.846 CXX test/cpp_headers/cpuset.o 00:01:48.846 CXX test/cpp_headers/crc32.o 00:01:48.846 CXX test/cpp_headers/config.o 00:01:48.846 CXX test/cpp_headers/crc16.o 00:01:48.846 CXX test/cpp_headers/crc64.o 00:01:48.846 CXX test/cpp_headers/dif.o 00:01:49.111 CC app/spdk_tgt/spdk_tgt.o 00:01:49.111 CXX test/cpp_headers/dma.o 00:01:49.111 CC examples/idxd/perf/perf.o 00:01:49.111 CC test/event/reactor/reactor.o 00:01:49.111 CC test/env/vtophys/vtophys.o 00:01:49.111 CC test/accel/dif/dif.o 00:01:49.111 CC test/env/pci/pci_ut.o 00:01:49.111 CC examples/blob/cli/blobcli.o 00:01:49.111 CC app/fio/nvme/fio_plugin.o 00:01:49.111 CC test/nvme/reset/reset.o 00:01:49.111 CC test/nvme/startup/startup.o 00:01:49.111 CC examples/vmd/lsvmd/lsvmd.o 00:01:49.111 CC examples/vmd/led/led.o 00:01:49.111 CC test/nvme/cuse/cuse.o 00:01:49.111 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:49.111 CC examples/accel/perf/accel_perf.o 00:01:49.111 CC test/nvme/simple_copy/simple_copy.o 00:01:49.111 CC test/nvme/connect_stress/connect_stress.o 00:01:49.111 CC test/event/app_repeat/app_repeat.o 00:01:49.111 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:49.111 CC examples/ioat/verify/verify.o 00:01:49.111 CC examples/ioat/perf/perf.o 00:01:49.111 CC test/app/histogram_perf/histogram_perf.o 00:01:49.111 CC test/nvme/reserve/reserve.o 00:01:49.111 CC test/event/event_perf/event_perf.o 00:01:49.111 CC examples/nvme/hotplug/hotplug.o 00:01:49.111 CC test/nvme/fused_ordering/fused_ordering.o 00:01:49.111 CC test/nvme/e2edp/nvme_dp.o 00:01:49.111 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:49.111 CC test/nvme/sgl/sgl.o 00:01:49.111 CC test/event/reactor_perf/reactor_perf.o 00:01:49.111 CC test/nvme/compliance/nvme_compliance.o 00:01:49.111 CC test/nvme/overhead/overhead.o 00:01:49.111 CC examples/util/zipf/zipf.o 00:01:49.111 CC test/env/memory/memory_ut.o 00:01:49.111 CC test/event/scheduler/scheduler.o 00:01:49.111 CC test/app/jsoncat/jsoncat.o 00:01:49.111 CC test/dma/test_dma/test_dma.o 00:01:49.111 CC 
test/nvme/fdp/fdp.o 00:01:49.111 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:49.111 CC test/nvme/boot_partition/boot_partition.o 00:01:49.111 CC test/nvme/aer/aer.o 00:01:49.111 CC examples/nvme/hello_world/hello_world.o 00:01:49.111 CC test/bdev/bdevio/bdevio.o 00:01:49.111 CC examples/nvme/abort/abort.o 00:01:49.111 CC examples/sock/hello_world/hello_sock.o 00:01:49.111 CC examples/blob/hello_world/hello_blob.o 00:01:49.111 CC test/app/stub/stub.o 00:01:49.111 CC examples/nvme/arbitration/arbitration.o 00:01:49.111 LINK spdk_lspci 00:01:49.111 CC test/nvme/err_injection/err_injection.o 00:01:49.111 CC examples/bdev/bdevperf/bdevperf.o 00:01:49.111 CC app/fio/bdev/fio_plugin.o 00:01:49.111 CC test/blobfs/mkfs/mkfs.o 00:01:49.111 CC test/thread/poller_perf/poller_perf.o 00:01:49.111 CC examples/bdev/hello_world/hello_bdev.o 00:01:49.111 CC examples/nvme/reconnect/reconnect.o 00:01:49.111 CC examples/nvmf/nvmf/nvmf.o 00:01:49.111 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:49.111 CC examples/thread/thread/thread_ex.o 00:01:49.377 CC test/app/bdev_svc/bdev_svc.o 00:01:49.377 LINK rpc_client_test 00:01:49.377 LINK spdk_nvme_discover 00:01:49.378 LINK spdk_trace_record 00:01:49.378 LINK vhost 00:01:49.378 LINK reactor 00:01:49.378 LINK vtophys 00:01:49.378 LINK lsvmd 00:01:49.378 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:49.378 CXX test/cpp_headers/endian.o 00:01:49.378 CC test/env/mem_callbacks/mem_callbacks.o 00:01:49.378 CXX test/cpp_headers/env_dpdk.o 00:01:49.378 CXX test/cpp_headers/env.o 00:01:49.378 CXX test/cpp_headers/event.o 00:01:49.638 LINK event_perf 00:01:49.638 LINK interrupt_tgt 00:01:49.638 CC test/lvol/esnap/esnap.o 00:01:49.638 CXX test/cpp_headers/fd_group.o 00:01:49.638 LINK startup 00:01:49.638 LINK spdk_tgt 00:01:49.638 LINK histogram_perf 00:01:49.638 CXX test/cpp_headers/fd.o 00:01:49.638 CXX test/cpp_headers/file.o 00:01:49.638 LINK nvmf_tgt 00:01:49.638 CXX test/cpp_headers/ftl.o 00:01:49.638 LINK boot_partition 00:01:49.638 CXX test/cpp_headers/gpt_spec.o 00:01:49.638 LINK cmb_copy 00:01:49.638 LINK connect_stress 00:01:49.638 CXX test/cpp_headers/hexlify.o 00:01:49.638 CXX test/cpp_headers/histogram_data.o 00:01:49.638 CXX test/cpp_headers/idxd.o 00:01:49.638 LINK fused_ordering 00:01:49.638 LINK reserve 00:01:49.638 LINK mkfs 00:01:49.638 LINK bdev_svc 00:01:49.638 LINK reset 00:01:49.638 CXX test/cpp_headers/idxd_spec.o 00:01:49.638 LINK led 00:01:49.638 LINK jsoncat 00:01:49.638 CXX test/cpp_headers/init.o 00:01:49.638 LINK reactor_perf 00:01:49.638 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:49.638 LINK nvme_dp 00:01:49.638 LINK zipf 00:01:49.638 LINK env_dpdk_post_init 00:01:49.638 LINK iscsi_tgt 00:01:49.638 LINK app_repeat 00:01:49.638 LINK pmr_persistence 00:01:49.638 LINK poller_perf 00:01:49.638 LINK aer 00:01:49.638 CXX test/cpp_headers/ioat.o 00:01:49.638 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:49.638 CXX test/cpp_headers/ioat_spec.o 00:01:49.638 CXX test/cpp_headers/iscsi_spec.o 00:01:49.638 LINK spdk_trace 00:01:49.638 CXX test/cpp_headers/json.o 00:01:49.638 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:49.638 CXX test/cpp_headers/jsonrpc.o 00:01:49.638 CXX test/cpp_headers/keyring.o 00:01:49.638 CXX test/cpp_headers/keyring_module.o 00:01:49.638 LINK stub 00:01:49.638 LINK err_injection 00:01:49.638 LINK thread 00:01:49.638 CXX test/cpp_headers/likely.o 00:01:49.900 LINK idxd_perf 00:01:49.900 CXX test/cpp_headers/log.o 00:01:49.900 LINK doorbell_aers 00:01:49.900 CXX test/cpp_headers/lvol.o 00:01:49.900 LINK scheduler 
00:01:49.900 LINK dif 00:01:49.900 LINK ioat_perf 00:01:49.900 CXX test/cpp_headers/memory.o 00:01:49.900 LINK hotplug 00:01:49.900 CXX test/cpp_headers/mmio.o 00:01:49.900 CXX test/cpp_headers/nbd.o 00:01:49.900 LINK simple_copy 00:01:49.900 LINK verify 00:01:49.900 LINK hello_world 00:01:49.900 LINK pci_ut 00:01:49.900 CXX test/cpp_headers/notify.o 00:01:49.900 LINK hello_bdev 00:01:49.900 LINK hello_blob 00:01:49.900 CXX test/cpp_headers/nvme.o 00:01:49.900 CXX test/cpp_headers/nvme_intel.o 00:01:49.900 LINK sgl 00:01:49.900 CXX test/cpp_headers/nvme_ocssd.o 00:01:49.900 LINK arbitration 00:01:49.900 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:49.900 CXX test/cpp_headers/nvme_spec.o 00:01:49.900 LINK hello_sock 00:01:49.900 LINK bdevio 00:01:49.900 LINK spdk_dd 00:01:49.900 CXX test/cpp_headers/nvme_zns.o 00:01:49.900 LINK accel_perf 00:01:49.900 CXX test/cpp_headers/nvmf_cmd.o 00:01:49.900 LINK overhead 00:01:49.900 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:49.901 CXX test/cpp_headers/nvmf.o 00:01:49.901 CXX test/cpp_headers/nvmf_spec.o 00:01:49.901 CXX test/cpp_headers/nvmf_transport.o 00:01:49.901 CXX test/cpp_headers/opal.o 00:01:49.901 CXX test/cpp_headers/opal_spec.o 00:01:49.901 CXX test/cpp_headers/pci_ids.o 00:01:49.901 CXX test/cpp_headers/pipe.o 00:01:49.901 CXX test/cpp_headers/queue.o 00:01:49.901 LINK nvmf 00:01:49.901 LINK nvme_compliance 00:01:49.901 CXX test/cpp_headers/reduce.o 00:01:49.901 LINK fdp 00:01:49.901 CXX test/cpp_headers/rpc.o 00:01:49.901 CXX test/cpp_headers/scheduler.o 00:01:49.901 CXX test/cpp_headers/scsi.o 00:01:49.901 CXX test/cpp_headers/scsi_spec.o 00:01:49.901 CXX test/cpp_headers/sock.o 00:01:49.901 CXX test/cpp_headers/stdinc.o 00:01:49.901 CXX test/cpp_headers/string.o 00:01:49.901 CXX test/cpp_headers/thread.o 00:01:49.901 CXX test/cpp_headers/trace.o 00:01:49.901 LINK spdk_nvme 00:01:49.901 CXX test/cpp_headers/trace_parser.o 00:01:50.160 CXX test/cpp_headers/tree.o 00:01:50.160 LINK test_dma 00:01:50.160 CXX test/cpp_headers/ublk.o 00:01:50.160 CXX test/cpp_headers/util.o 00:01:50.160 CXX test/cpp_headers/uuid.o 00:01:50.160 LINK reconnect 00:01:50.160 CXX test/cpp_headers/version.o 00:01:50.160 CXX test/cpp_headers/vfio_user_spec.o 00:01:50.160 CXX test/cpp_headers/vfio_user_pci.o 00:01:50.160 CXX test/cpp_headers/vhost.o 00:01:50.160 LINK spdk_bdev 00:01:50.160 CXX test/cpp_headers/vmd.o 00:01:50.160 CXX test/cpp_headers/zipf.o 00:01:50.160 CXX test/cpp_headers/xor.o 00:01:50.160 LINK abort 00:01:50.160 LINK nvme_fuzz 00:01:50.160 LINK spdk_nvme_perf 00:01:50.160 LINK blobcli 00:01:50.160 LINK spdk_nvme_identify 00:01:50.160 LINK spdk_top 00:01:50.427 LINK nvme_manage 00:01:50.427 LINK mem_callbacks 00:01:50.427 LINK vhost_fuzz 00:01:50.427 LINK cuse 00:01:50.427 LINK bdevperf 00:01:50.694 LINK memory_ut 00:01:51.263 LINK iscsi_fuzz 00:01:53.172 LINK esnap 00:01:53.172 00:01:53.172 real 0m42.257s 00:01:53.172 user 6m30.191s 00:01:53.172 sys 3m34.932s 00:01:53.172 00:35:45 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:53.172 00:35:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.172 ************************************ 00:01:53.172 END TEST make 00:01:53.172 ************************************ 00:01:53.432 00:35:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:53.432 00:35:45 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:53.432 00:35:45 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:53.432 00:35:45 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.432 00:35:45 
-- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:53.432 00:35:45 -- pm/common@45 -- $ pid=1399585 00:01:53.432 00:35:45 -- pm/common@52 -- $ sudo kill -TERM 1399585 00:01:53.432 00:35:45 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.432 00:35:45 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:53.432 00:35:45 -- pm/common@45 -- $ pid=1399588 00:01:53.432 00:35:45 -- pm/common@52 -- $ sudo kill -TERM 1399588 00:01:53.432 00:35:45 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.432 00:35:45 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:53.432 00:35:45 -- pm/common@45 -- $ pid=1399591 00:01:53.432 00:35:45 -- pm/common@52 -- $ sudo kill -TERM 1399591 00:01:53.432 00:35:45 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.432 00:35:45 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:53.432 00:35:45 -- pm/common@45 -- $ pid=1399592 00:01:53.432 00:35:45 -- pm/common@52 -- $ sudo kill -TERM 1399592 00:01:53.432 00:35:46 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:53.432 00:35:46 -- nvmf/common.sh@7 -- # uname -s 00:01:53.432 00:35:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:53.432 00:35:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:53.432 00:35:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:53.432 00:35:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:53.432 00:35:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:53.693 00:35:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:53.693 00:35:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:53.693 00:35:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:53.693 00:35:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:53.693 00:35:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:53.693 00:35:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:01:53.693 00:35:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:01:53.693 00:35:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:53.693 00:35:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:53.693 00:35:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:53.693 00:35:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:53.693 00:35:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:53.693 00:35:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:53.693 00:35:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.693 00:35:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.693 00:35:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.693 00:35:46 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.693 00:35:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.693 00:35:46 -- paths/export.sh@5 -- # export PATH 00:01:53.693 00:35:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.693 00:35:46 -- nvmf/common.sh@47 -- # : 0 00:01:53.693 00:35:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:53.693 00:35:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:53.693 00:35:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:53.693 00:35:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:53.693 00:35:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:53.693 00:35:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:53.693 00:35:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:53.693 00:35:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:53.693 00:35:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:53.693 00:35:46 -- spdk/autotest.sh@32 -- # uname -s 00:01:53.693 00:35:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:53.693 00:35:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:53.693 00:35:46 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:53.693 00:35:46 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:53.693 00:35:46 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:53.693 00:35:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:53.693 00:35:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:53.693 00:35:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:53.693 00:35:46 -- spdk/autotest.sh@48 -- # udevadm_pid=1457529 00:01:53.693 00:35:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:53.693 00:35:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:53.693 00:35:46 -- pm/common@17 -- # local monitor 00:01:53.693 00:35:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.693 00:35:46 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1457530 00:01:53.693 00:35:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.693 00:35:46 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1457532 00:01:53.693 00:35:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.693 00:35:46 -- pm/common@21 -- # date +%s 00:01:53.693 00:35:46 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1457535 00:01:53.693 00:35:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.693 00:35:46 -- pm/common@21 -- # date +%s 00:01:53.693 00:35:46 -- pm/common@21 -- # date 
+%s 00:01:53.693 00:35:46 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=1457539 00:01:53.693 00:35:46 -- pm/common@26 -- # sleep 1 00:01:53.693 00:35:46 -- pm/common@21 -- # date +%s 00:01:53.693 00:35:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714170946 00:01:53.693 00:35:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714170946 00:01:53.693 00:35:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714170946 00:01:53.693 00:35:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714170946 00:01:53.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714170946_collect-vmstat.pm.log 00:01:53.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714170946_collect-cpu-temp.pm.log 00:01:53.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714170946_collect-bmc-pm.bmc.pm.log 00:01:53.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714170946_collect-cpu-load.pm.log 00:01:54.634 00:35:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:54.634 00:35:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:54.634 00:35:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:54.634 00:35:47 -- common/autotest_common.sh@10 -- # set +x 00:01:54.634 00:35:47 -- spdk/autotest.sh@59 -- # create_test_list 00:01:54.634 00:35:47 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:54.634 00:35:47 -- common/autotest_common.sh@10 -- # set +x 00:01:54.634 00:35:47 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:54.634 00:35:47 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.634 00:35:47 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.634 00:35:47 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:54.634 00:35:47 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:54.634 00:35:47 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:54.634 00:35:47 -- common/autotest_common.sh@1441 -- # uname 00:01:54.634 00:35:47 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:54.634 00:35:47 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:54.634 00:35:47 -- common/autotest_common.sh@1461 -- # uname 00:01:54.634 00:35:47 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:54.634 00:35:47 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:54.634 00:35:47 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:54.634 00:35:47 -- spdk/autotest.sh@72 -- # hash lcov 00:01:54.634 00:35:47 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == 
*\c\l\a\n\g* ]] 00:01:54.634 00:35:47 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:54.634 --rc lcov_branch_coverage=1 00:01:54.634 --rc lcov_function_coverage=1 00:01:54.634 --rc genhtml_branch_coverage=1 00:01:54.634 --rc genhtml_function_coverage=1 00:01:54.634 --rc genhtml_legend=1 00:01:54.634 --rc geninfo_all_blocks=1 00:01:54.634 ' 00:01:54.634 00:35:47 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:54.634 --rc lcov_branch_coverage=1 00:01:54.634 --rc lcov_function_coverage=1 00:01:54.634 --rc genhtml_branch_coverage=1 00:01:54.634 --rc genhtml_function_coverage=1 00:01:54.634 --rc genhtml_legend=1 00:01:54.634 --rc geninfo_all_blocks=1 00:01:54.634 ' 00:01:54.634 00:35:47 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:54.634 --rc lcov_branch_coverage=1 00:01:54.634 --rc lcov_function_coverage=1 00:01:54.634 --rc genhtml_branch_coverage=1 00:01:54.634 --rc genhtml_function_coverage=1 00:01:54.634 --rc genhtml_legend=1 00:01:54.634 --rc geninfo_all_blocks=1 00:01:54.634 --no-external' 00:01:54.634 00:35:47 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:54.634 --rc lcov_branch_coverage=1 00:01:54.634 --rc lcov_function_coverage=1 00:01:54.634 --rc genhtml_branch_coverage=1 00:01:54.634 --rc genhtml_function_coverage=1 00:01:54.634 --rc genhtml_legend=1 00:01:54.634 --rc geninfo_all_blocks=1 00:01:54.634 --no-external' 00:01:54.634 00:35:47 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:54.634 lcov: LCOV version 1.14 00:01:54.634 00:35:47 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:01.215 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:01.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:01.215 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 
00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:01.216 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:01.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:01.216 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:01.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:01.217 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:01.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:01.217 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:01.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:01.217 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:03.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:03.759 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:10.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:10.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:10.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:10.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:10.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:10.440 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:17.012 00:36:08 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:17.012 00:36:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:17.012 00:36:08 -- common/autotest_common.sh@10 -- # set +x 00:02:17.012 00:36:08 -- spdk/autotest.sh@91 -- # rm -f 00:02:17.012 00:36:08 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:18.392 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:18.392 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:18.651 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:18.651 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:18.651 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:18.651 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:18.651 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:18.651 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:18.651 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:18.651 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:18.651 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:18.652 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:18.652 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:18.652 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:18.652 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:18.911 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:18.911 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:18.911 00:36:11 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:18.911 00:36:11 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:18.911 00:36:11 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:18.911 00:36:11 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:18.911 00:36:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:18.911 00:36:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:18.911 00:36:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:18.911 00:36:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:18.911 00:36:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:18.911 00:36:11 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:18.911 00:36:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:18.911 00:36:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:18.911 00:36:11 -- spdk/autotest.sh@113 -- # block_in_use 
/dev/nvme0n1 00:02:18.911 00:36:11 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:18.911 00:36:11 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:18.911 No valid GPT data, bailing 00:02:18.911 00:36:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:18.911 00:36:11 -- scripts/common.sh@391 -- # pt= 00:02:18.911 00:36:11 -- scripts/common.sh@392 -- # return 1 00:02:18.911 00:36:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:18.911 1+0 records in 00:02:18.911 1+0 records out 00:02:18.911 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00571392 s, 184 MB/s 00:02:18.911 00:36:11 -- spdk/autotest.sh@118 -- # sync 00:02:18.911 00:36:11 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:18.911 00:36:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:18.911 00:36:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:24.186 00:36:16 -- spdk/autotest.sh@124 -- # uname -s 00:02:24.186 00:36:16 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:24.186 00:36:16 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:24.186 00:36:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:24.186 00:36:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:24.186 00:36:16 -- common/autotest_common.sh@10 -- # set +x 00:02:24.186 ************************************ 00:02:24.186 START TEST setup.sh 00:02:24.186 ************************************ 00:02:24.186 00:36:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:24.186 * Looking for test storage... 00:02:24.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:24.186 00:36:16 -- setup/test-setup.sh@10 -- # uname -s 00:02:24.186 00:36:16 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:24.186 00:36:16 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:24.186 00:36:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:24.186 00:36:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:24.186 00:36:16 -- common/autotest_common.sh@10 -- # set +x 00:02:24.186 ************************************ 00:02:24.186 START TEST acl 00:02:24.186 ************************************ 00:02:24.186 00:36:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:24.186 * Looking for test storage... 
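The pre-cleanup trace above shows how the namespace is judged reusable: zoned devices are skipped (the /sys/block/*/queue/zoned probe inside get_zoned_devs), spdk-gpt.py acts as the partition check behind block_in_use, and only after it bails with "No valid GPT data" is the first 1 MiB of /dev/nvme0n1 zeroed and synced. The following is a minimal stand-alone sketch of that flow, not the autotest_common.sh code itself; blkid stands in for spdk-gpt.py and the loop structure is illustrative.

# Sketch only: simplified restatement of get_zoned_devs + block_in_use + the dd wipe.
shopt -s nullglob
for dev in /dev/nvme*n*; do
  name=${dev##*/}
  [[ $name == *p* ]] && continue              # keep whole namespaces, skip partitions
  [[ -b $dev ]] || continue
  if [[ -e /sys/block/$name/queue/zoned && $(< "/sys/block/$name/queue/zoned") != none ]]; then
    continue                                  # zoned namespaces are never wiped
  fi
  if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
    dd if=/dev/zero of="$dev" bs=1M count=1   # no partition table found, clear the first MiB
  fi
done
sync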
00:02:24.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:24.186 00:36:16 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:24.186 00:36:16 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:24.186 00:36:16 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:24.186 00:36:16 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:24.186 00:36:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:24.186 00:36:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:24.186 00:36:16 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:24.186 00:36:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:24.186 00:36:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:24.186 00:36:16 -- setup/acl.sh@12 -- # devs=() 00:02:24.186 00:36:16 -- setup/acl.sh@12 -- # declare -a devs 00:02:24.186 00:36:16 -- setup/acl.sh@13 -- # drivers=() 00:02:24.186 00:36:16 -- setup/acl.sh@13 -- # declare -A drivers 00:02:24.186 00:36:16 -- setup/acl.sh@51 -- # setup reset 00:02:24.186 00:36:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:24.186 00:36:16 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:27.473 00:36:19 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:27.473 00:36:19 -- setup/acl.sh@16 -- # local dev driver 00:02:27.473 00:36:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:27.473 00:36:19 -- setup/acl.sh@15 -- # setup output status 00:02:27.473 00:36:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:27.473 00:36:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:29.376 Hugepages 00:02:29.376 node hugesize free / total 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:02:29.376 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:29.376 00:36:21 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:29.376 00:36:21 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:29.376 00:36:21 -- setup/acl.sh@20 -- # continue 00:02:29.376 00:36:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:29.376 00:36:21 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:29.376 00:36:21 -- setup/acl.sh@54 -- # run_test denied denied 00:02:29.376 00:36:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:29.376 00:36:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:29.376 00:36:21 -- common/autotest_common.sh@10 -- # set +x 00:02:29.635 ************************************ 00:02:29.635 START TEST denied 00:02:29.635 ************************************ 00:02:29.635 00:36:22 -- common/autotest_common.sh@1111 -- # denied 00:02:29.635 00:36:22 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:29.635 00:36:22 -- setup/acl.sh@38 -- # setup output config 00:02:29.635 00:36:22 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:29.635 00:36:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:29.635 00:36:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:32.921 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:32.921 00:36:24 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:32.921 00:36:24 -- setup/acl.sh@28 -- # local dev driver 00:02:32.921 00:36:24 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:32.921 00:36:24 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:32.921 00:36:24 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:32.921 00:36:24 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:32.921 00:36:24 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:32.921 00:36:24 -- setup/acl.sh@41 -- # setup reset 00:02:32.921 00:36:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:32.921 00:36:24 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:36.358 00:02:36.358 real 0m6.603s 00:02:36.358 user 0m2.181s 00:02:36.358 sys 0m3.726s 00:02:36.358 00:36:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:36.358 00:36:28 -- common/autotest_common.sh@10 -- # set +x 00:02:36.358 ************************************ 00:02:36.358 END TEST denied 00:02:36.358 ************************************ 00:02:36.358 00:36:28 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:36.358 00:36:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:36.358 00:36:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:36.358 00:36:28 -- common/autotest_common.sh@10 -- # set +x 00:02:36.358 ************************************ 00:02:36.358 START TEST allowed 00:02:36.358 ************************************ 00:02:36.358 00:36:28 -- common/autotest_common.sh@1111 -- # allowed 00:02:36.358 00:36:28 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:36.358 00:36:28 -- setup/acl.sh@45 -- # setup output config 00:02:36.358 00:36:28 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:36.358 00:36:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:36.358 00:36:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
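The denied test that just finished blocks the NVMe controller through PCI_BLOCKED and asserts that setup.sh config reports it as skipped; the allowed test started above does the inverse with PCI_ALLOWED and expects the controller to be rebound (its "nvme -> vfio-pci" line appears just below). A hedged sketch of both assertions, using the BDF and grep patterns from this run; the real checks live in test/setup/acl.sh.

# Assumed workspace path taken from this job; PCI_BLOCKED / PCI_ALLOWED are the
# controller filters honored by scripts/setup.sh, and the grep patterns mirror
# the messages visible in this log.
setup=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
bdf=0000:5e:00.0

# denied: the blocked controller must be reported as skipped
PCI_BLOCKED=" $bdf" "$setup" config | grep "Skipping denied controller at $bdf"
"$setup" reset

# allowed: only this controller is eligible, so config must rebind it to a userspace driver
PCI_ALLOWED="$bdf" "$setup" config | grep -E "$bdf .*: nvme -> .*"
"$setup" reset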
00:02:40.547 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:40.547 00:36:32 -- setup/acl.sh@47 -- # verify 00:02:40.547 00:36:32 -- setup/acl.sh@28 -- # local dev driver 00:02:40.547 00:36:32 -- setup/acl.sh@48 -- # setup reset 00:02:40.547 00:36:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:40.547 00:36:32 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:43.082 00:02:43.082 real 0m6.872s 00:02:43.082 user 0m2.197s 00:02:43.082 sys 0m3.824s 00:02:43.082 00:36:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:43.082 00:36:35 -- common/autotest_common.sh@10 -- # set +x 00:02:43.082 ************************************ 00:02:43.082 END TEST allowed 00:02:43.082 ************************************ 00:02:43.082 00:02:43.082 real 0m19.155s 00:02:43.082 user 0m6.362s 00:02:43.082 sys 0m11.199s 00:02:43.082 00:36:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:43.082 00:36:35 -- common/autotest_common.sh@10 -- # set +x 00:02:43.082 ************************************ 00:02:43.082 END TEST acl 00:02:43.082 ************************************ 00:02:43.342 00:36:35 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:43.342 00:36:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:43.342 00:36:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:43.342 00:36:35 -- common/autotest_common.sh@10 -- # set +x 00:02:43.342 ************************************ 00:02:43.342 START TEST hugepages 00:02:43.342 ************************************ 00:02:43.342 00:36:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:43.342 * Looking for test storage... 
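Each START TEST / END TEST banner and the real/user/sys timings above come from the run_test wrapper that drives these suites. A reduced stand-in is shown below; the actual autotest_common.sh version also manages xtrace and timing bookkeeping, which is omitted here, so this only reproduces the banner-and-time shape.

run_test() {
  local name=$1
  shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}

# e.g. run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh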
00:02:43.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:43.342 00:36:36 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:43.342 00:36:36 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:43.342 00:36:36 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:43.342 00:36:36 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:43.342 00:36:36 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:43.342 00:36:36 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:43.342 00:36:36 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:43.342 00:36:36 -- setup/common.sh@18 -- # local node= 00:02:43.342 00:36:36 -- setup/common.sh@19 -- # local var val 00:02:43.342 00:36:36 -- setup/common.sh@20 -- # local mem_f mem 00:02:43.342 00:36:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:43.342 00:36:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:43.342 00:36:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:43.342 00:36:36 -- setup/common.sh@28 -- # mapfile -t mem 00:02:43.342 00:36:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:43.342 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.342 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.342 00:36:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 168850764 kB' 'MemAvailable: 172680396 kB' 'Buffers: 3888 kB' 'Cached: 14207528 kB' 'SwapCached: 0 kB' 'Active: 11175128 kB' 'Inactive: 3663216 kB' 'Active(anon): 10114356 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 630660 kB' 'Mapped: 206716 kB' 'Shmem: 9487428 kB' 'KReclaimable: 499720 kB' 'Slab: 1133180 kB' 'SReclaimable: 499720 kB' 'SUnreclaim: 633460 kB' 'KernelStack: 20624 kB' 'PageTables: 10092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982040 kB' 'Committed_AS: 11630364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316100 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:43.342 00:36:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.342 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.342 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.342 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.342 00:36:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.342 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.342 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.342 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.342 00:36:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.342 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.342 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.342 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.342 00:36:36 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.342 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.342 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.343 00:36:36 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.343 00:36:36 -- setup/common.sh@32 -- # continue ... (the xtrace repeats the same compare / continue / IFS=': ' / read -r cycle for every /proc/meminfo field from Zswap through ShmemHugePages; none of them is Hugepagesize, so the scan keeps going) 00:02:43.343 00:36:36 -- setup/common.sh@31 -- # IFS=': '
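For readers following the trace: setup/common.sh's get_meminfo walks /proc/meminfo field by field with IFS=': ' and returns the value of the requested key (here Hugepagesize). A minimal standalone sketch of that kind of lookup, written for illustration rather than copied from the SPDK helper, looks like this:

  #!/usr/bin/env bash
  # Sketch only: return the numeric value of one /proc/meminfo field.
  # Mirrors the read loop being traced above (IFS=': ', compare, echo, return).
  get_meminfo_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"        # e.g. 2048 for Hugepagesize (value reported in kB)
              return 0
          fi
      done < /proc/meminfo
      return 1                   # field not present
  }

  get_meminfo_field Hugepagesize   # prints 2048 on this test node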
00:02:43.603 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.603 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.603 00:36:36 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:43.603 00:36:36 -- setup/common.sh@33 -- # echo 2048 00:02:43.603 00:36:36 -- setup/common.sh@33 -- # return 0 00:02:43.603 00:36:36 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:43.603 00:36:36 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:43.603 00:36:36 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:43.603 00:36:36 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:43.603 00:36:36 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:43.603 00:36:36 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:43.603 00:36:36 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:43.603 00:36:36 -- setup/hugepages.sh@207 -- # get_nodes 00:02:43.603 00:36:36 -- setup/hugepages.sh@27 -- # local node 00:02:43.603 00:36:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.603 00:36:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:43.603 00:36:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:43.603 00:36:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:43.603 00:36:36 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:43.603 00:36:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:43.603 00:36:36 -- setup/hugepages.sh@208 -- # clear_hp 00:02:43.603 00:36:36 -- setup/hugepages.sh@37 -- # local node hp 00:02:43.603 00:36:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:43.603 00:36:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:43.603 00:36:36 -- setup/hugepages.sh@41 -- # echo 0 00:02:43.603 00:36:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:43.603 00:36:36 -- setup/hugepages.sh@41 -- # echo 0 00:02:43.603 00:36:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:43.603 00:36:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:43.603 00:36:36 -- setup/hugepages.sh@41 -- # echo 0 00:02:43.603 00:36:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:43.603 00:36:36 -- setup/hugepages.sh@41 -- # echo 0 00:02:43.603 00:36:36 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:43.603 00:36:36 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:43.603 00:36:36 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:43.603 00:36:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:43.603 00:36:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:43.603 00:36:36 -- common/autotest_common.sh@10 -- # set +x 00:02:43.603 ************************************ 00:02:43.603 START TEST default_setup 00:02:43.603 ************************************ 00:02:43.603 00:36:36 -- common/autotest_common.sh@1111 -- # default_setup 00:02:43.603 00:36:36 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:43.603 00:36:36 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:43.603 00:36:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:43.603 00:36:36 -- setup/hugepages.sh@51 -- # shift 00:02:43.603 00:36:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:43.603 00:36:36 -- setup/hugepages.sh@52 -- # local node_ids 00:02:43.603 00:36:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:43.603 00:36:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:43.604 00:36:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:43.604 00:36:36 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:43.604 00:36:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:43.604 00:36:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:43.604 00:36:36 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:43.604 00:36:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:43.604 00:36:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:43.604 00:36:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
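The clear_hp and get_test_nr_hugepages steps traced here come down to ordinary sysfs writes: zero out whatever hugepage reservations each NUMA node currently holds, then request the pages the test needs (1024 x 2048 kB, all on node 0, as the trace shows). A hedged sketch of the same idea, using the standard Linux hugepage sysfs layout rather than the project's own setup code:

  # Sketch: clear existing hugepage reservations on every NUMA node,
  # then ask node 0 for 1024 x 2048 kB pages (2 GiB), as default_setup does.
  # Requires root; the numbers are the ones appearing in this log.
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
          echo 0 > "$hp"
      done
  done
  echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  grep HugePages_Total /proc/meminfo    # expect: HugePages_Total: 1024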
00:02:43.604 00:36:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:43.604 00:36:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:43.604 00:36:36 -- setup/hugepages.sh@73 -- # return 0 00:02:43.604 00:36:36 -- setup/hugepages.sh@137 -- # setup output 00:02:43.604 00:36:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.604 00:36:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:46.139 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:46.398 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:47.339 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:47.339 00:36:39 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:47.339 00:36:39 -- setup/hugepages.sh@89 -- # local node 00:02:47.339 00:36:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:47.339 00:36:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:47.339 00:36:39 -- setup/hugepages.sh@92 -- # local surp 00:02:47.339 00:36:39 -- setup/hugepages.sh@93 -- # local resv 00:02:47.339 00:36:39 -- setup/hugepages.sh@94 -- # local anon 00:02:47.339 00:36:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:47.339 00:36:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:47.339 00:36:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:47.339 00:36:39 -- setup/common.sh@18 -- # local node= 00:02:47.339 00:36:39 -- setup/common.sh@19 -- # local var val 00:02:47.339 00:36:39 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.339 00:36:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.339 00:36:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.339 00:36:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.339 00:36:39 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.339 00:36:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.339 00:36:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171013828 kB' 'MemAvailable: 174843408 kB' 'Buffers: 3888 kB' 'Cached: 14207632 kB' 'SwapCached: 0 kB' 'Active: 11192456 kB' 'Inactive: 3663216 kB' 'Active(anon): 10131684 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647548 kB' 'Mapped: 206688 kB' 'Shmem: 9487532 kB' 'KReclaimable: 499616 kB' 'Slab: 1132156 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 632540 kB' 'KernelStack: 
20816 kB' 'PageTables: 9944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11649384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316228 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.339 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.339 00:36:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
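The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines a little further up are scripts/setup.sh rebinding the IOAT DMA channels and the NVMe drive to vfio-pci so SPDK can drive them from user space. The generic sysfs mechanism behind such a rebind is sketched below (standard driver_override interface, not the script's actual code; the BDF is one seen in this log):

  # Sketch: move one PCI function from its kernel driver to vfio-pci.
  bdf=0000:5e:00.0                                          # NVMe device from this log
  modprobe vfio-pci
  echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind     # detach the kernel driver (nvme)
  echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override # pin the next probe to vfio-pci
  echo "$bdf" > /sys/bus/pci/drivers_probe                  # rebind; vfio-pci picks it up
  readlink /sys/bus/pci/devices/$bdf/driver                 # -> .../drivers/vfio-pci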
00:02:47.339 00:36:39 -- setup/common.sh@32 -- # continue ... (the same compare/continue cycle is traced for every remaining field from Inactive(file) through VmallocTotal; none matches AnonHugePages, so the scan keeps going) 00:02:47.340 00:36:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:47.340 00:36:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.340 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.340 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.340 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.340 00:36:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.340 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.340 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.340 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.340 00:36:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.340 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.340 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.340 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.340 00:36:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.340 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.340 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.340 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.340 00:36:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.340 00:36:39 -- setup/common.sh@33 -- # echo 0 00:02:47.340 00:36:39 -- setup/common.sh@33 -- # return 0 00:02:47.340 00:36:39 -- setup/hugepages.sh@97 -- # anon=0 00:02:47.340 00:36:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:47.340 00:36:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.340 00:36:39 -- setup/common.sh@18 -- # local node= 00:02:47.340 00:36:39 -- setup/common.sh@19 -- # local var val 00:02:47.340 00:36:39 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.340 00:36:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.340 00:36:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.340 00:36:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.340 00:36:39 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.340 00:36:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.340 00:36:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171014548 kB' 'MemAvailable: 174844128 kB' 'Buffers: 3888 kB' 'Cached: 14207636 kB' 'SwapCached: 0 kB' 'Active: 11191692 kB' 'Inactive: 3663216 kB' 'Active(anon): 10130920 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646656 kB' 'Mapped: 206756 kB' 'Shmem: 9487536 kB' 'KReclaimable: 499616 kB' 'Slab: 1132140 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 632524 kB' 'KernelStack: 20736 kB' 'PageTables: 10452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11649396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316196 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:47.340 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 
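verify_nr_hugepages issues one get_meminfo call per counter (AnonHugePages just above, then HugePages_Surp, HugePages_Rsvd and HugePages_Total below), which is why the same field scan is traced several times over. Purely as an aside, the values it is collecting can also be pulled in a single pass; a small awk sketch for illustration:

  # Sketch: grab all the counters verify_nr_hugepages checks in one pass.
  awk '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/ {print $1, $2}' /proc/meminfo
  # On this node the log shows:
  #   AnonHugePages: 0   HugePages_Total: 1024   HugePages_Free: 1024
  #   HugePages_Rsvd: 0  HugePages_Surp: 0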
00:02:47.340 00:36:39 -- setup/common.sh@31 -- # read -r var val _ ... (the HugePages_Surp lookup walks the snapshot the same way: every field from MemTotal through HugePages_Free is compared against HugePages_Surp and skipped) 00:02:47.341 00:36:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.341 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.341 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.341 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.341 00:36:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.341 00:36:39 -- setup/common.sh@33 -- # echo 0 00:02:47.341 00:36:39 -- setup/common.sh@33 -- # return 0 00:02:47.341 00:36:39 -- setup/hugepages.sh@99 -- # surp=0 00:02:47.341 00:36:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:47.341 00:36:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:47.341 00:36:39 -- setup/common.sh@18 -- # local node= 00:02:47.341 00:36:39 -- setup/common.sh@19 -- # local var val 00:02:47.341 00:36:39 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.341 00:36:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.341 00:36:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.341 00:36:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.341 00:36:39 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.341 00:36:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.341 00:36:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171012996 kB' 'MemAvailable: 174842576 kB' 'Buffers: 3888 kB' 'Cached: 14207648 kB' 'SwapCached: 0 kB' 'Active: 11191528 kB' 'Inactive: 3663216 kB' 'Active(anon): 10130756 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646432 kB' 'Mapped: 206688 kB' 'Shmem: 9487548 kB' 'KReclaimable: 499616 kB' 'Slab: 1132096 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 632480 kB' 'KernelStack: 20592 kB' 'PageTables: 10024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11648016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316308 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:47.341 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.341 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.341 00:36:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.342 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.342 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.342 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.342 00:36:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.342 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.342 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.342 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.342 00:36:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.342 00:36:39 -- setup/common.sh@32 -- # continue 00:02:47.342 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.342 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.342 00:36:39 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... (the HugePages_Rsvd lookup repeats the compare/continue cycle for every field from Buffers through AnonHugePages; no match yet) 00:02:47.342 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 
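Just below, the script echoes nr_hugepages=1024 with resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and the check it performs is simple accounting: the kernel's HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages. A sketch of that check with the figures from this run:

  # Sketch of the verify_nr_hugepages accounting, using this run's numbers.
  nr_hugepages=1024; surp=0; resv=0
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 here
  (( total == nr_hugepages + surp + resv )) && echo "hugepages OK"
  # 1024 pages x 2048 kB/page = 2097152 kB = 2 GiB, matching 'Hugetlb: 2097152 kB' above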
00:02:47.342 00:36:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.342 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.343 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.343 00:36:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.343 00:36:40 -- setup/common.sh@33 -- # echo 0 00:02:47.343 00:36:40 -- setup/common.sh@33 -- # return 0 00:02:47.343 00:36:40 -- setup/hugepages.sh@100 -- # resv=0 00:02:47.343 00:36:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:47.343 nr_hugepages=1024 00:02:47.343 00:36:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:47.343 resv_hugepages=0 00:02:47.343 00:36:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:47.343 surplus_hugepages=0 00:02:47.343 00:36:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:47.343 anon_hugepages=0 00:02:47.343 00:36:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:47.343 00:36:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:47.343 00:36:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:47.343 00:36:40 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:02:47.343 00:36:40 -- setup/common.sh@18 -- # local node= 00:02:47.343 00:36:40 -- setup/common.sh@19 -- # local var val 00:02:47.343 00:36:40 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.343 00:36:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.343 00:36:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.343 00:36:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.343 00:36:40 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.343 00:36:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.605 00:36:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171017208 kB' 'MemAvailable: 174846788 kB' 'Buffers: 3888 kB' 'Cached: 14207660 kB' 'SwapCached: 0 kB' 'Active: 11192852 kB' 'Inactive: 3663216 kB' 'Active(anon): 10132080 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648060 kB' 'Mapped: 206688 kB' 'Shmem: 9487560 kB' 'KReclaimable: 499616 kB' 'Slab: 1132096 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 632480 kB' 'KernelStack: 20688 kB' 'PageTables: 10048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11649424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316228 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.605 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.605 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 
00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 
00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.606 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.606 00:36:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.606 00:36:40 -- setup/common.sh@33 -- # echo 1024 00:02:47.606 00:36:40 -- setup/common.sh@33 -- # return 0 00:02:47.606 00:36:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:47.606 00:36:40 -- setup/hugepages.sh@112 -- # get_nodes 00:02:47.606 00:36:40 -- setup/hugepages.sh@27 -- # local node 00:02:47.606 00:36:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.606 00:36:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:47.606 00:36:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.606 00:36:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:47.606 00:36:40 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:47.606 00:36:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:47.606 00:36:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:47.606 00:36:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:47.606 00:36:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:47.606 00:36:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.606 00:36:40 -- setup/common.sh@18 -- # local node=0 00:02:47.606 00:36:40 -- setup/common.sh@19 -- # local var val 00:02:47.606 00:36:40 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.606 00:36:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.606 00:36:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:47.607 00:36:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:47.607 00:36:40 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.607 00:36:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90942368 
kB' 'MemUsed: 6673260 kB' 'SwapCached: 0 kB' 'Active: 3198984 kB' 'Inactive: 135680 kB' 'Active(anon): 2756392 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 135680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2836604 kB' 'Mapped: 90540 kB' 'AnonPages: 501508 kB' 'Shmem: 2258332 kB' 'KernelStack: 13464 kB' 'PageTables: 7068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 269356 kB' 'Slab: 562976 kB' 'SReclaimable: 269356 kB' 'SUnreclaim: 293620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- 
# read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 
-- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # continue 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.607 00:36:40 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.607 00:36:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.607 00:36:40 -- setup/common.sh@33 -- # echo 0 00:02:47.607 00:36:40 -- setup/common.sh@33 -- # return 0 00:02:47.607 00:36:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:47.608 00:36:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:47.608 00:36:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:47.608 00:36:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:47.608 00:36:40 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:47.608 node0=1024 expecting 1024 00:02:47.608 00:36:40 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:47.608 00:02:47.608 real 0m3.889s 00:02:47.608 user 0m1.224s 00:02:47.608 sys 0m1.901s 00:02:47.608 00:36:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:47.608 00:36:40 -- common/autotest_common.sh@10 -- # set +x 00:02:47.608 ************************************ 00:02:47.608 END TEST default_setup 00:02:47.608 ************************************ 00:02:47.608 00:36:40 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:47.608 00:36:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:47.608 00:36:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:47.608 00:36:40 -- common/autotest_common.sh@10 -- # set +x 00:02:47.608 ************************************ 00:02:47.608 START TEST per_node_1G_alloc 00:02:47.608 ************************************ 00:02:47.608 00:36:40 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:02:47.608 00:36:40 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:47.608 00:36:40 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:47.608 00:36:40 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:47.608 00:36:40 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:47.608 00:36:40 -- setup/hugepages.sh@51 -- # shift 00:02:47.608 00:36:40 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:47.608 00:36:40 -- setup/hugepages.sh@52 -- # local node_ids 00:02:47.608 00:36:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:47.608 00:36:40 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:47.608 00:36:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:47.608 00:36:40 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:47.608 00:36:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:47.608 00:36:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:47.608 00:36:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:47.608 00:36:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:47.608 00:36:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:47.608 00:36:40 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:47.608 00:36:40 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:47.608 00:36:40 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:47.608 00:36:40 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:47.608 00:36:40 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:47.608 00:36:40 -- setup/hugepages.sh@73 -- # return 0 00:02:47.608 00:36:40 -- setup/hugepages.sh@146 -- # 
NRHUGE=512 00:02:47.608 00:36:40 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:47.608 00:36:40 -- setup/hugepages.sh@146 -- # setup output 00:02:47.608 00:36:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.608 00:36:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:50.145 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:50.145 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:50.145 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:50.408 00:36:42 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:50.408 00:36:42 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:50.408 00:36:42 -- setup/hugepages.sh@89 -- # local node 00:02:50.408 00:36:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:50.408 00:36:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:50.408 00:36:42 -- setup/hugepages.sh@92 -- # local surp 00:02:50.408 00:36:42 -- setup/hugepages.sh@93 -- # local resv 00:02:50.408 00:36:42 -- setup/hugepages.sh@94 -- # local anon 00:02:50.408 00:36:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:50.408 00:36:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:50.408 00:36:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:50.408 00:36:42 -- setup/common.sh@18 -- # local node= 00:02:50.408 00:36:42 -- setup/common.sh@19 -- # local var val 00:02:50.408 00:36:42 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.408 00:36:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.408 00:36:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.408 00:36:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.408 00:36:42 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.408 00:36:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171026632 kB' 'MemAvailable: 174856212 kB' 'Buffers: 3888 kB' 'Cached: 14207748 kB' 'SwapCached: 0 kB' 'Active: 11197268 kB' 'Inactive: 3663216 kB' 'Active(anon): 10136496 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
652316 kB' 'Mapped: 207560 kB' 'Shmem: 9487648 kB' 'KReclaimable: 499616 kB' 'Slab: 1131956 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 632340 kB' 'KernelStack: 20608 kB' 'PageTables: 9876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11653592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316200 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 
00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 
00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.408 00:36:42 -- setup/common.sh@33 -- # echo 0 00:02:50.408 00:36:42 -- setup/common.sh@33 -- # return 0 00:02:50.408 00:36:42 -- setup/hugepages.sh@97 -- # anon=0 00:02:50.408 00:36:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:50.408 00:36:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.408 00:36:42 -- setup/common.sh@18 -- # local node= 00:02:50.408 00:36:42 -- setup/common.sh@19 -- # local var val 00:02:50.408 00:36:42 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.408 00:36:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.408 00:36:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.408 00:36:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.408 00:36:42 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.408 00:36:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171033520 kB' 'MemAvailable: 174863100 kB' 'Buffers: 3888 kB' 'Cached: 14207748 kB' 'SwapCached: 0 kB' 'Active: 11191684 kB' 'Inactive: 3663216 kB' 'Active(anon): 10130912 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646712 kB' 'Mapped: 207072 kB' 'Shmem: 9487648 kB' 'KReclaimable: 499616 kB' 'Slab: 1132000 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 632384 kB' 'KernelStack: 20656 kB' 'PageTables: 10020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11647484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316164 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 
00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.408 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.408 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 
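The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" entries here are the xtrace of get_meminfo in setup/common.sh: it reads the relevant meminfo file, then walks it one field per iteration until it reaches the requested key, echoes that value, and returns. Condensed into a sketch reconstructed from this trace (not the verbatim script; the function wrapper, the mapfile input redirection, and the final return 1 are assumptions), the loop is roughly:

    shopt -s extglob                                  # the "Node N " strip below uses an extended glob
    get_meminfo() {
        local get=$1 node=${2:-}                      # field to look up, optional NUMA node id
        local var val _
        local mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                     # one array element per meminfo line
        mem=("${mem[@]#Node +([0-9]) }")              # per-node files prefix each line with "Node N "
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"                           # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1                                      # assumed fallthrough; not visible in the trace
    }

Each get_meminfo call in these hugepages tests (HugePages_Total, HugePages_Rsvd, HugePages_Surp, AnonHugePages, and the per-node variants) performs one such field-by-field scan, which is why the same [[ ... ]] / continue pattern repeats throughout this part of the log.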
00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:42 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:42 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 
-- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.409 00:36:43 -- setup/common.sh@33 -- # echo 0 00:02:50.409 00:36:43 -- setup/common.sh@33 -- # return 0 00:02:50.409 00:36:43 -- setup/hugepages.sh@99 -- # surp=0 00:02:50.409 00:36:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:50.409 00:36:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:50.409 00:36:43 -- setup/common.sh@18 -- # local node= 00:02:50.409 00:36:43 -- setup/common.sh@19 -- # local var val 00:02:50.409 00:36:43 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.409 00:36:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.409 00:36:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.409 00:36:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.409 00:36:43 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.409 00:36:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171034700 kB' 'MemAvailable: 174864280 kB' 'Buffers: 3888 kB' 'Cached: 14207760 kB' 'SwapCached: 0 kB' 'Active: 11191272 kB' 'Inactive: 3663216 kB' 'Active(anon): 10130500 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646272 kB' 'Mapped: 206680 kB' 'Shmem: 9487660 kB' 'KReclaimable: 499616 kB' 'Slab: 1132000 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 632384 kB' 'KernelStack: 20640 kB' 'PageTables: 9948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11647496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316164 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 
00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.409 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.409 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 
00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.410 00:36:43 -- setup/common.sh@33 -- # echo 0 00:02:50.410 00:36:43 -- setup/common.sh@33 -- # return 0 00:02:50.410 00:36:43 -- setup/hugepages.sh@100 -- # resv=0 00:02:50.410 00:36:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:50.410 nr_hugepages=1024 00:02:50.410 00:36:43 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:50.410 resv_hugepages=0 00:02:50.410 00:36:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:50.410 surplus_hugepages=0 00:02:50.410 00:36:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:50.410 anon_hugepages=0 00:02:50.410 00:36:43 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:50.410 00:36:43 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
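The trace above shows setup/common.sh's get_meminfo walking /proc/meminfo field by field until it reaches the requested counter (HugePages_Surp, then HugePages_Rsvd), and setup/hugepages.sh then checking that the requested 1024 hugepages are fully accounted for. The following is a minimal standalone sketch of that pattern, not the SPDK scripts themselves; the helper name get_meminfo_value and the hard-coded expectations (1024 total, nodes 0 and 1) are illustrative assumptions taken from the values visible in this trace.

#!/usr/bin/env bash
# Illustrative sketch only -- get_meminfo_value is an assumed helper name, not
# the setup/common.sh implementation; expectations mirror the trace above.

get_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live in sysfs and carry a "Node <N> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}              # drop the per-node prefix if present
        IFS=': ' read -r var val _ <<< "$line"  # e.g. var=HugePages_Total val=1024
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Accounting check corresponding to "(( 1024 == nr_hugepages + surp + resv ))":
nr_hugepages=$(get_meminfo_value HugePages_Total)
surp=$(get_meminfo_value HugePages_Surp)
resv=$(get_meminfo_value HugePages_Rsvd)
(( 1024 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2

# Per-node view, matching the later "node0=512 expecting 512" / "node1=512 expecting 512":
for n in 0 1; do
    echo "node$n HugePages_Total: $(get_meminfo_value HugePages_Total "$n")"
done

Reading /sys/devices/system/node/node<N>/meminfo rather than /proc/meminfo is what lets the test below confirm that the 1024 pages were split evenly across the two NUMA nodes, 512 per node.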
00:02:50.410 00:36:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:50.410 00:36:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:50.410 00:36:43 -- setup/common.sh@18 -- # local node= 00:02:50.410 00:36:43 -- setup/common.sh@19 -- # local var val 00:02:50.410 00:36:43 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.410 00:36:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.410 00:36:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.410 00:36:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.410 00:36:43 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.410 00:36:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171035048 kB' 'MemAvailable: 174864628 kB' 'Buffers: 3888 kB' 'Cached: 14207776 kB' 'SwapCached: 0 kB' 'Active: 11191292 kB' 'Inactive: 3663216 kB' 'Active(anon): 10130520 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646276 kB' 'Mapped: 206680 kB' 'Shmem: 9487676 kB' 'KReclaimable: 499616 kB' 'Slab: 1132000 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 632384 kB' 'KernelStack: 20640 kB' 'PageTables: 9948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11647512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316164 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 
-- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 
00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- 
setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.410 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.410 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.411 00:36:43 -- setup/common.sh@33 -- # echo 1024 00:02:50.411 00:36:43 -- setup/common.sh@33 -- # return 0 00:02:50.411 00:36:43 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:50.411 00:36:43 -- setup/hugepages.sh@112 -- # get_nodes 00:02:50.411 00:36:43 -- setup/hugepages.sh@27 -- # local node 00:02:50.411 00:36:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.411 00:36:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:50.411 00:36:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.411 00:36:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:50.411 00:36:43 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:50.411 00:36:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:50.411 00:36:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:50.411 00:36:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.411 00:36:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:50.411 00:36:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.411 00:36:43 -- setup/common.sh@18 -- # local node=0 00:02:50.411 00:36:43 -- setup/common.sh@19 -- # local var val 00:02:50.411 00:36:43 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.411 00:36:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.411 00:36:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:50.411 00:36:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:50.411 00:36:43 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.411 00:36:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:50.411 00:36:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92003808 kB' 'MemUsed: 5611820 kB' 'SwapCached: 0 kB' 'Active: 3195420 kB' 'Inactive: 135680 kB' 'Active(anon): 2752828 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 135680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2836612 kB' 'Mapped: 90516 kB' 'AnonPages: 497720 kB' 'Shmem: 2258340 kB' 'KernelStack: 13352 kB' 'PageTables: 6144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 269356 kB' 'Slab: 563020 kB' 'SReclaimable: 269356 kB' 'SUnreclaim: 293664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # 
continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 
00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.411 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.411 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.411 00:36:43 -- setup/common.sh@33 -- # echo 0 00:02:50.411 00:36:43 -- setup/common.sh@33 -- # return 0 00:02:50.411 00:36:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.411 00:36:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:50.411 00:36:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.671 00:36:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:50.671 00:36:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.671 00:36:43 -- setup/common.sh@18 -- # local node=1 00:02:50.671 00:36:43 -- setup/common.sh@19 -- # local var val 00:02:50.671 00:36:43 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.671 00:36:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.671 00:36:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:50.671 00:36:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:50.671 00:36:43 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.671 00:36:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.671 00:36:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765552 kB' 'MemFree: 79031848 kB' 'MemUsed: 14733704 kB' 'SwapCached: 0 kB' 'Active: 7995912 kB' 'Inactive: 3527536 kB' 'Active(anon): 7377732 kB' 'Inactive(anon): 0 kB' 'Active(file): 618180 kB' 'Inactive(file): 3527536 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11375080 kB' 'Mapped: 116164 kB' 'AnonPages: 148512 kB' 'Shmem: 7229364 kB' 'KernelStack: 7272 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 230260 kB' 'Slab: 568980 kB' 'SReclaimable: 230260 kB' 'SUnreclaim: 338720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 
00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.671 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.671 00:36:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # continue 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.672 00:36:43 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.672 00:36:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.672 00:36:43 -- setup/common.sh@33 -- # echo 0 00:02:50.672 00:36:43 -- setup/common.sh@33 -- # return 0 00:02:50.672 00:36:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.672 00:36:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.672 00:36:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.672 00:36:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.672 00:36:43 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:50.672 node0=512 expecting 512 00:02:50.672 00:36:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.672 00:36:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.672 00:36:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.672 00:36:43 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:50.672 node1=512 expecting 512 00:02:50.672 00:36:43 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:50.672 00:02:50.672 real 0m2.880s 00:02:50.672 user 0m1.179s 00:02:50.672 sys 0m1.730s 00:02:50.672 00:36:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:50.672 00:36:43 -- common/autotest_common.sh@10 -- # set +x 00:02:50.672 ************************************ 00:02:50.672 END TEST per_node_1G_alloc 00:02:50.672 ************************************ 00:02:50.672 00:36:43 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:50.672 00:36:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:50.672 00:36:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:50.672 00:36:43 -- common/autotest_common.sh@10 -- # set +x 00:02:50.672 ************************************ 00:02:50.672 START TEST even_2G_alloc 00:02:50.672 ************************************ 00:02:50.672 00:36:43 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:02:50.672 00:36:43 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:50.672 00:36:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:50.672 00:36:43 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:50.672 00:36:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:50.672 00:36:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:50.672 00:36:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:50.672 00:36:43 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:50.672 00:36:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:50.672 00:36:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:50.672 00:36:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:50.672 00:36:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:50.672 00:36:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:50.672 00:36:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:50.672 00:36:43 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:50.672 00:36:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.672 00:36:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:50.672 00:36:43 -- setup/hugepages.sh@83 -- # : 512 00:02:50.672 00:36:43 -- setup/hugepages.sh@84 -- # : 1 00:02:50.672 00:36:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.672 00:36:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:50.672 00:36:43 -- setup/hugepages.sh@83 -- # : 0 00:02:50.672 00:36:43 -- setup/hugepages.sh@84 -- # : 0 00:02:50.672 00:36:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.672 00:36:43 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:50.672 00:36:43 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:50.672 00:36:43 -- setup/hugepages.sh@153 -- # setup output 00:02:50.672 00:36:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.672 00:36:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:53.215 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:53.215 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:80:04.2 (8086 
2021): Already using the vfio-pci driver 00:02:53.215 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:53.215 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:53.215 00:36:45 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:53.215 00:36:45 -- setup/hugepages.sh@89 -- # local node 00:02:53.215 00:36:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:53.215 00:36:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:53.215 00:36:45 -- setup/hugepages.sh@92 -- # local surp 00:02:53.215 00:36:45 -- setup/hugepages.sh@93 -- # local resv 00:02:53.215 00:36:45 -- setup/hugepages.sh@94 -- # local anon 00:02:53.215 00:36:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:53.215 00:36:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:53.215 00:36:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:53.215 00:36:45 -- setup/common.sh@18 -- # local node= 00:02:53.215 00:36:45 -- setup/common.sh@19 -- # local var val 00:02:53.215 00:36:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:53.215 00:36:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.215 00:36:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.215 00:36:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.215 00:36:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.215 00:36:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171051580 kB' 'MemAvailable: 174881160 kB' 'Buffers: 3888 kB' 'Cached: 14207856 kB' 'SwapCached: 0 kB' 'Active: 11190368 kB' 'Inactive: 3663216 kB' 'Active(anon): 10129596 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645104 kB' 'Mapped: 205640 kB' 'Shmem: 9487756 kB' 'KReclaimable: 499616 kB' 'Slab: 1131096 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 631480 kB' 'KernelStack: 20848 kB' 'PageTables: 10084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11639312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316308 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.215 00:36:45 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:53.215 00:36:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 
00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.216 00:36:45 -- 
setup/common.sh@33 -- # echo 0 00:02:53.216 00:36:45 -- setup/common.sh@33 -- # return 0 00:02:53.216 00:36:45 -- setup/hugepages.sh@97 -- # anon=0 00:02:53.216 00:36:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:53.216 00:36:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.216 00:36:45 -- setup/common.sh@18 -- # local node= 00:02:53.216 00:36:45 -- setup/common.sh@19 -- # local var val 00:02:53.216 00:36:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:53.216 00:36:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.216 00:36:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.216 00:36:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.216 00:36:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.216 00:36:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.216 00:36:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171056032 kB' 'MemAvailable: 174885612 kB' 'Buffers: 3888 kB' 'Cached: 14207856 kB' 'SwapCached: 0 kB' 'Active: 11190432 kB' 'Inactive: 3663216 kB' 'Active(anon): 10129660 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645248 kB' 'Mapped: 205580 kB' 'Shmem: 9487756 kB' 'KReclaimable: 499616 kB' 'Slab: 1131064 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 631448 kB' 'KernelStack: 20832 kB' 'PageTables: 10780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11639324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316340 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.216 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.216 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 
00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 
00:36:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # IFS=': 
' 00:02:53.217 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.217 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.218 00:36:45 -- setup/common.sh@33 -- # echo 0 00:02:53.218 00:36:45 -- setup/common.sh@33 -- # return 0 00:02:53.218 00:36:45 -- setup/hugepages.sh@99 -- # surp=0 00:02:53.218 00:36:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:53.218 00:36:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:53.218 00:36:45 -- setup/common.sh@18 -- # local node= 00:02:53.218 00:36:45 -- setup/common.sh@19 -- # local var val 00:02:53.218 00:36:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:53.218 00:36:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.218 00:36:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.218 00:36:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.218 00:36:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.218 00:36:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.218 00:36:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 
'MemFree: 171052032 kB' 'MemAvailable: 174881612 kB' 'Buffers: 3888 kB' 'Cached: 14207868 kB' 'SwapCached: 0 kB' 'Active: 11191340 kB' 'Inactive: 3663216 kB' 'Active(anon): 10130568 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646112 kB' 'Mapped: 205580 kB' 'Shmem: 9487768 kB' 'KReclaimable: 499616 kB' 'Slab: 1131032 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 631416 kB' 'KernelStack: 21216 kB' 'PageTables: 11400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11639184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316324 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.218 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.218 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 
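[Editor's note] The trace above repeats the same /proc/meminfo scan for each requested key (AnonHugePages, HugePages_Surp, and now HugePages_Rsvd): every line is split with IFS=': ' into "var val _", non-matching keys hit "continue", and the matching key echoes its value and returns 0, which is why each pass ends in "echo 0 / return 0" here. The following is only a minimal stand-alone sketch of that pattern, not the SPDK setup/common.sh code itself; the function name get_meminfo_sketch is illustrative.

    # Sketch of the per-key /proc/meminfo lookup seen in the xtrace above.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # e.g. "HugePages_Rsvd:   0" -> var=HugePages_Rsvd, val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # get_meminfo_sketch HugePages_Rsvd   # prints 0 on this host, per the trace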
00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- 
setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.219 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.219 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.220 00:36:45 -- setup/common.sh@33 -- # echo 0 00:02:53.220 00:36:45 -- setup/common.sh@33 -- # return 0 00:02:53.220 00:36:45 -- setup/hugepages.sh@100 -- # resv=0 00:02:53.220 00:36:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:53.220 nr_hugepages=1024 00:02:53.220 00:36:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:53.220 resv_hugepages=0 00:02:53.220 00:36:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:53.220 surplus_hugepages=0 00:02:53.220 00:36:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:53.220 anon_hugepages=0 00:02:53.220 00:36:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:53.220 00:36:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:53.220 00:36:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:53.220 00:36:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:53.220 00:36:45 -- setup/common.sh@18 -- # local node= 00:02:53.220 00:36:45 -- setup/common.sh@19 -- # local var val 00:02:53.220 00:36:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:53.220 00:36:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.220 00:36:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.220 00:36:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.220 00:36:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.220 00:36:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171057832 kB' 'MemAvailable: 174887412 kB' 'Buffers: 3888 kB' 'Cached: 14207884 kB' 'SwapCached: 0 kB' 'Active: 11189868 kB' 'Inactive: 3663216 kB' 'Active(anon): 10129096 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644128 kB' 'Mapped: 205552 kB' 'Shmem: 9487784 kB' 'KReclaimable: 499616 kB' 'Slab: 
1131280 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 631664 kB' 'KernelStack: 20592 kB' 'PageTables: 9692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11636720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316164 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.220 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.220 00:36:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 
00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.221 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.221 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 
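The long run of escaped [[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] tests above is just the xtrace of a field-by-field scan of /proc/meminfo. As a rough illustration only (the helper name below is made up and is not SPDK's setup/common.sh code), the same scan can be written as:

    get_meminfo_field() {
        # Hypothetical sketch: walk /proc/meminfo with the IFS=': ' / read -r
        # pattern visible in the trace and print the value of one field.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # numeric value only; the unit lands in "_"
                return 0
            fi
        done </proc/meminfo
        return 1
    }

On this run, asking for HugePages_Total this way would print 1024, the value the script echoes at setup/common.sh@33 just below.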
00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.222 00:36:45 -- setup/common.sh@33 -- # echo 1024 00:02:53.222 00:36:45 -- setup/common.sh@33 -- # return 0 00:02:53.222 00:36:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:53.222 00:36:45 -- setup/hugepages.sh@112 -- # get_nodes 00:02:53.222 00:36:45 -- setup/hugepages.sh@27 -- # local node 00:02:53.222 00:36:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.222 00:36:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:53.222 00:36:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.222 00:36:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:53.222 00:36:45 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:53.222 00:36:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:53.222 00:36:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.222 00:36:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.222 00:36:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:53.222 00:36:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.222 00:36:45 -- setup/common.sh@18 -- # local node=0 00:02:53.222 00:36:45 -- setup/common.sh@19 -- # local var val 00:02:53.222 00:36:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:53.222 00:36:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.222 00:36:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:53.222 00:36:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:53.222 00:36:45 -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.222 00:36:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92021516 kB' 'MemUsed: 5594112 kB' 'SwapCached: 0 kB' 'Active: 3194740 kB' 'Inactive: 135680 kB' 'Active(anon): 2752148 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 135680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2836620 kB' 'Mapped: 89388 kB' 'AnonPages: 497056 kB' 'Shmem: 2258348 kB' 'KernelStack: 13368 kB' 'PageTables: 6120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 269356 kB' 'Slab: 562272 kB' 'SReclaimable: 269356 kB' 'SUnreclaim: 292916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 
00:36:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.222 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.222 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 
-- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
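When a node number is passed, the trace above shows the same scan switching its source to /sys/devices/system/node/node0/meminfo and stripping the "Node <N> " prefix from every line before parsing it (the real script does this with an extglob pattern over the whole mapfile array). A standalone sketch of that per-node variant, again with a hypothetical helper name rather than the real get_meminfo:

    get_node_meminfo_field() {
        local get=$1 node=$2 line var val _
        local mem_f=/sys/devices/system/node/node${node}/meminfo
        [[ -e $mem_f ]] || mem_f=/proc/meminfo   # same fallback the trace tests for
        while read -r line; do
            line=${line#"Node $node "}           # per-node lines read "Node 0 MemTotal: ..."
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1
    }

For node 0 on this box, HugePages_Surp comes back as 0 and HugePages_Total as 512, matching the node0 meminfo dump printed above.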
00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.223 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.223 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.223 00:36:45 -- setup/common.sh@33 -- # echo 0 00:02:53.223 00:36:45 -- setup/common.sh@33 -- # return 0 00:02:53.223 00:36:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.224 00:36:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.224 00:36:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.224 00:36:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:53.224 00:36:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.224 00:36:45 -- setup/common.sh@18 -- # local node=1 00:02:53.224 00:36:45 -- setup/common.sh@19 -- # local var val 00:02:53.224 00:36:45 -- setup/common.sh@20 -- # local mem_f mem 00:02:53.224 00:36:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.224 00:36:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:53.224 00:36:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:53.224 00:36:45 -- 
setup/common.sh@28 -- # mapfile -t mem 00:02:53.224 00:36:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.224 00:36:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765552 kB' 'MemFree: 79038876 kB' 'MemUsed: 14726676 kB' 'SwapCached: 0 kB' 'Active: 7994636 kB' 'Inactive: 3527536 kB' 'Active(anon): 7376456 kB' 'Inactive(anon): 0 kB' 'Active(file): 618180 kB' 'Inactive(file): 3527536 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11375180 kB' 'Mapped: 116164 kB' 'AnonPages: 147088 kB' 'Shmem: 7229464 kB' 'KernelStack: 7192 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 230260 kB' 'Slab: 568988 kB' 'SReclaimable: 230260 kB' 'SUnreclaim: 338728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.224 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.224 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.225 00:36:45 
-- setup/common.sh@31 -- # read -r var val _ 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # continue 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.225 00:36:45 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.225 00:36:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.225 00:36:45 -- setup/common.sh@33 -- # echo 0 00:02:53.225 00:36:45 -- setup/common.sh@33 -- # return 0 00:02:53.225 00:36:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.225 00:36:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.225 00:36:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.225 00:36:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.225 00:36:45 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:53.225 node0=512 expecting 512 00:02:53.225 00:36:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.225 00:36:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.225 00:36:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.225 00:36:45 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:53.225 node1=512 expecting 512 00:02:53.225 00:36:45 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:53.225 00:02:53.225 real 0m2.405s 00:02:53.225 user 0m0.872s 00:02:53.225 sys 0m1.370s 00:02:53.225 00:36:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:53.225 00:36:45 -- common/autotest_common.sh@10 -- # set +x 00:02:53.225 ************************************ 00:02:53.225 END TEST even_2G_alloc 00:02:53.225 ************************************ 00:02:53.225 00:36:45 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:53.225 00:36:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:53.225 00:36:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:53.225 00:36:45 -- common/autotest_common.sh@10 -- # set +x 00:02:53.225 ************************************ 00:02:53.225 START TEST odd_alloc 00:02:53.225 ************************************ 00:02:53.225 00:36:45 -- common/autotest_common.sh@1111 -- # odd_alloc 00:02:53.225 00:36:45 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:53.225 00:36:45 -- setup/hugepages.sh@49 -- # local size=2098176 00:02:53.225 00:36:45 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:53.225 00:36:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:53.225 00:36:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:53.225 00:36:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:53.225 00:36:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:53.225 00:36:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.225 00:36:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:53.225 00:36:45 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.225 00:36:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:53.225 00:36:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:53.225 00:36:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:53.225 
00:36:45 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:53.225 00:36:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.225 00:36:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:53.225 00:36:45 -- setup/hugepages.sh@83 -- # : 513 00:02:53.225 00:36:45 -- setup/hugepages.sh@84 -- # : 1 00:02:53.225 00:36:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.225 00:36:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:53.225 00:36:45 -- setup/hugepages.sh@83 -- # : 0 00:02:53.225 00:36:45 -- setup/hugepages.sh@84 -- # : 0 00:02:53.225 00:36:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.225 00:36:45 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:53.225 00:36:45 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:53.225 00:36:45 -- setup/hugepages.sh@160 -- # setup output 00:02:53.225 00:36:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.225 00:36:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:55.762 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:55.762 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:55.762 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:55.762 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:55.762 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:56.024 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:56.024 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:56.024 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:56.024 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:56.024 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:56.024 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:56.024 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:56.024 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:56.024 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:56.024 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:56.024 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:56.024 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:56.024 00:36:48 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:56.024 00:36:48 -- setup/hugepages.sh@89 -- # local node 00:02:56.024 00:36:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:56.024 00:36:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:56.024 00:36:48 -- setup/hugepages.sh@92 -- # local surp 00:02:56.024 00:36:48 -- setup/hugepages.sh@93 -- # local resv 00:02:56.024 00:36:48 -- setup/hugepages.sh@94 -- # local anon 00:02:56.024 00:36:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:56.024 00:36:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:56.024 00:36:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:56.024 00:36:48 -- setup/common.sh@18 -- # local node= 00:02:56.024 00:36:48 -- setup/common.sh@19 -- # local var val 00:02:56.024 00:36:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.024 00:36:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.024 00:36:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.024 00:36:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.024 00:36:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.024 00:36:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:02:56.024 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.024 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.024 00:36:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171031184 kB' 'MemAvailable: 174860764 kB' 'Buffers: 3888 kB' 'Cached: 14207976 kB' 'SwapCached: 0 kB' 'Active: 11190952 kB' 'Inactive: 3663216 kB' 'Active(anon): 10130180 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645656 kB' 'Mapped: 205620 kB' 'Shmem: 9487876 kB' 'KReclaimable: 499616 kB' 'Slab: 1130716 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 631100 kB' 'KernelStack: 20640 kB' 'PageTables: 9824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 11637316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316212 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:56.024 00:36:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.024 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.024 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.024 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.024 00:36:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.024 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.024 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.024 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.024 00:36:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.024 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.024 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.024 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.024 00:36:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.024 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.024 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.024 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.024 00:36:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.024 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.024 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.024 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.024 00:36:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.024 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.024 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- 
setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ 
AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read 
-r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.025 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.025 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.026 00:36:48 -- setup/common.sh@33 -- # echo 0 00:02:56.026 00:36:48 -- setup/common.sh@33 -- # return 0 00:02:56.026 00:36:48 -- setup/hugepages.sh@97 -- # anon=0 00:02:56.026 00:36:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:56.026 00:36:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.026 00:36:48 -- setup/common.sh@18 -- # local node= 00:02:56.026 00:36:48 -- setup/common.sh@19 -- # local var val 00:02:56.026 00:36:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.026 00:36:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.026 00:36:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.026 00:36:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.026 00:36:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.026 00:36:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171030952 kB' 'MemAvailable: 174860532 kB' 'Buffers: 3888 kB' 'Cached: 14207980 kB' 'SwapCached: 0 kB' 'Active: 11191008 kB' 'Inactive: 3663216 kB' 'Active(anon): 10130236 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645696 kB' 'Mapped: 205584 kB' 'Shmem: 9487880 kB' 'KReclaimable: 499616 kB' 'Slab: 1130768 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 631152 kB' 'KernelStack: 20688 kB' 'PageTables: 9960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 11637328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316180 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 
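The odd_alloc test being set up and verified here requests 1025 hugepages (nr_hugepages=1025 in the trace) and, per the hugepages.sh@82 assignments earlier, hands them out back to front so node1 gets 512 and node0 absorbs the remainder, 513. A minimal sketch of that split loop, assuming two NUMA nodes and using a made-up function name:

    split_hugepages_across_nodes() {
        # Illustrative only: give each remaining node the floor share of what
        # is left, starting from the highest-numbered node, as the trace does.
        local left=$1 nodes=$2 i
        local -a per_node
        for ((i = nodes - 1; i >= 0; i--)); do
            per_node[i]=$((left / (i + 1)))
            left=$((left - per_node[i]))
        done
        echo "${per_node[*]}"
    }

Calling split_hugepages_across_nodes 1025 2 prints "513 512", the same node0/node1 counts the xtrace arrives at; for the even 1024-page case of the previous test the split is 512/512, which is what the "node0=512 expecting 512" lines above confirm.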
00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.026 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.026 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 
-- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.027 00:36:48 -- setup/common.sh@33 -- # echo 0 00:02:56.027 00:36:48 -- setup/common.sh@33 -- # return 0 00:02:56.027 00:36:48 -- setup/hugepages.sh@99 -- # surp=0 00:02:56.027 00:36:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:56.027 00:36:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:56.027 00:36:48 -- setup/common.sh@18 -- # local node= 00:02:56.027 00:36:48 -- setup/common.sh@19 -- # local var val 00:02:56.027 00:36:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.027 00:36:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.027 00:36:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.027 00:36:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.027 00:36:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.027 00:36:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171031620 kB' 'MemAvailable: 174861200 kB' 'Buffers: 3888 kB' 'Cached: 14207992 kB' 'SwapCached: 0 kB' 'Active: 11191016 kB' 'Inactive: 3663216 kB' 'Active(anon): 10130244 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645696 kB' 'Mapped: 205584 kB' 'Shmem: 9487892 kB' 'KReclaimable: 499616 kB' 'Slab: 1130768 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 631152 kB' 'KernelStack: 20688 kB' 'PageTables: 9960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 11637344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316180 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.027 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.027 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 
00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ Percpu 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.028 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.028 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.029 00:36:48 -- setup/common.sh@33 -- # echo 0 00:02:56.029 00:36:48 -- setup/common.sh@33 -- # return 0 
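Each of the blocks above is a single call to the get_meminfo helper in setup/common.sh, traced with set -x: it dumps the chosen meminfo file, then walks it field by field until the requested key matches and echoes the value. The sketch below is a simplified reconstruction based only on what the trace shows (the real setup/common.sh may differ in details): pick /proc/meminfo or the per-NUMA-node meminfo file, strip the "Node N " prefix so both formats parse the same way, then scan "Field: value" lines for the requested field.

#!/usr/bin/env bash
# Simplified reconstruction of get_meminfo as exercised above; not the
# verbatim SPDK source.
get_meminfo() {
	local get=$1 node=$2
	local var val _ mem_f mem

	mem_f=/proc/meminfo
	# With a node index, prefer the per-node view if sysfs exposes it.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	shopt -s extglob
	mapfile -t mem < "$mem_f"
	# Per-node meminfo lines carry a "Node N " prefix; drop it so the
	# parsing below is identical for both files.
	mem=("${mem[@]#Node +([0-9]) }")

	# "Field: value [kB]" -> var=Field, val=value; print the first match.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

# Reads matching the ones traced in this log:
get_meminfo AnonHugePages     # -> 0
get_meminfo HugePages_Surp    # -> 0
get_meminfo HugePages_Rsvd    # -> 0
get_meminfo HugePages_Total   # -> 1025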
00:02:56.029 00:36:48 -- setup/hugepages.sh@100 -- # resv=0 00:02:56.029 00:36:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:56.029 nr_hugepages=1025 00:02:56.029 00:36:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:56.029 resv_hugepages=0 00:02:56.029 00:36:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:56.029 surplus_hugepages=0 00:02:56.029 00:36:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:56.029 anon_hugepages=0 00:02:56.029 00:36:48 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:56.029 00:36:48 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:56.029 00:36:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:56.029 00:36:48 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:56.029 00:36:48 -- setup/common.sh@18 -- # local node= 00:02:56.029 00:36:48 -- setup/common.sh@19 -- # local var val 00:02:56.029 00:36:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.029 00:36:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.029 00:36:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.029 00:36:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.029 00:36:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.029 00:36:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.029 00:36:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171032044 kB' 'MemAvailable: 174861624 kB' 'Buffers: 3888 kB' 'Cached: 14208016 kB' 'SwapCached: 0 kB' 'Active: 11190708 kB' 'Inactive: 3663216 kB' 'Active(anon): 10129936 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645288 kB' 'Mapped: 205584 kB' 'Shmem: 9487916 kB' 'KReclaimable: 499616 kB' 'Slab: 1130768 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 631152 kB' 'KernelStack: 20672 kB' 'PageTables: 9908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029592 kB' 'Committed_AS: 11637356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316180 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.029 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.029 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 
-- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 
-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.291 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.291 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.292 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.292 00:36:48 -- setup/common.sh@33 -- # echo 1025 00:02:56.292 00:36:48 -- setup/common.sh@33 -- # return 0 00:02:56.292 00:36:48 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:56.292 00:36:48 -- setup/hugepages.sh@112 -- # get_nodes 00:02:56.292 00:36:48 -- setup/hugepages.sh@27 -- # local node 00:02:56.292 00:36:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.292 00:36:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:56.292 00:36:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.292 00:36:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:56.292 00:36:48 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.292 00:36:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.292 00:36:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.292 00:36:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.292 00:36:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:56.292 00:36:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.292 00:36:48 -- setup/common.sh@18 -- # local node=0 00:02:56.292 
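The get_nodes portion above records how many hugepages landed on each NUMA node (512 on node0 and 513 on node1, per the per-node meminfo dumps that follow), after the hugepages.sh@107/@110 checks have asserted that the global count (1025) equals nr_hugepages + surplus + reserved. The odd total appears chosen so the split across the two nodes cannot be even, which is what this part of the test exercises. A rough sketch of that per-node bookkeeping, reusing the get_meminfo sketch above (variable names here are illustrative, not the ones used in setup/hugepages.sh):

expected_total=1025
declare -A node_pages=()

# Collect HugePages_Total from every per-node meminfo file.
for node_dir in /sys/devices/system/node/node[0-9]*; do
	node=${node_dir##*node}
	node_pages[$node]=$(get_meminfo HugePages_Total "$node")
	# Surplus pages would skew the per-node totals, so they must stay 0.
	if (( $(get_meminfo HugePages_Surp "$node") != 0 )); then
		echo "unexpected surplus pages on node$node" >&2
	fi
done

# The per-node counts must add up to the requested global count.
sum=0
for node in "${!node_pages[@]}"; do
	(( sum += node_pages[node] ))
done
(( sum == expected_total )) || echo "per-node sum $sum != $expected_total" >&2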
00:36:48 -- setup/common.sh@19 -- # local var val 00:02:56.292 00:36:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.292 00:36:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.292 00:36:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:56.292 00:36:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:56.292 00:36:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.292 00:36:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.292 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92010712 kB' 'MemUsed: 5604916 kB' 'SwapCached: 0 kB' 'Active: 3196000 kB' 'Inactive: 135680 kB' 'Active(anon): 2753408 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 135680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2836680 kB' 'Mapped: 89420 kB' 'AnonPages: 498192 kB' 'Shmem: 2258408 kB' 'KernelStack: 13368 kB' 'PageTables: 6116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 269356 kB' 'Slab: 562080 kB' 'SReclaimable: 269356 kB' 'SUnreclaim: 292724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- 
# read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.293 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.293 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- 
# continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@33 -- # echo 0 00:02:56.294 00:36:48 -- setup/common.sh@33 -- # return 0 00:02:56.294 00:36:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.294 00:36:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.294 00:36:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.294 00:36:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:56.294 00:36:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.294 00:36:48 -- setup/common.sh@18 -- # local node=1 00:02:56.294 00:36:48 -- setup/common.sh@19 -- # local var val 00:02:56.294 00:36:48 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.294 00:36:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.294 00:36:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:56.294 00:36:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:56.294 00:36:48 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.294 00:36:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765552 kB' 'MemFree: 79021748 kB' 'MemUsed: 14743804 kB' 'SwapCached: 0 kB' 'Active: 7994976 kB' 'Inactive: 3527536 kB' 'Active(anon): 7376796 kB' 'Inactive(anon): 0 kB' 'Active(file): 618180 kB' 'Inactive(file): 3527536 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11375228 kB' 'Mapped: 116164 kB' 'AnonPages: 147352 kB' 'Shmem: 7229512 kB' 'KernelStack: 7304 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 230260 kB' 'Slab: 568688 kB' 'SReclaimable: 230260 kB' 'SUnreclaim: 338428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.294 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.294 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # 
continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # continue 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.295 00:36:48 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.295 00:36:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.295 00:36:48 -- setup/common.sh@33 -- # echo 0 00:02:56.295 00:36:48 -- setup/common.sh@33 -- # return 0 00:02:56.295 00:36:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.295 00:36:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.295 00:36:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.295 00:36:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.295 00:36:48 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:56.295 node0=512 expecting 513 00:02:56.295 00:36:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.295 00:36:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.295 00:36:48 -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.295 00:36:48 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:56.295 node1=513 expecting 512 00:02:56.295 00:36:48 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:56.295 00:02:56.295 real 0m2.933s 00:02:56.295 user 0m1.155s 00:02:56.295 sys 0m1.827s 00:02:56.295 00:36:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:56.295 00:36:48 -- common/autotest_common.sh@10 -- # set +x 00:02:56.295 ************************************ 00:02:56.295 END TEST odd_alloc 00:02:56.295 ************************************ 00:02:56.295 00:36:48 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:56.295 00:36:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:56.295 00:36:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:56.295 00:36:48 -- common/autotest_common.sh@10 -- # set +x 00:02:56.295 ************************************ 00:02:56.295 START TEST custom_alloc 00:02:56.295 ************************************ 00:02:56.295 00:36:48 -- common/autotest_common.sh@1111 -- # custom_alloc 00:02:56.295 00:36:48 -- setup/hugepages.sh@167 -- # local IFS=, 00:02:56.295 00:36:48 -- setup/hugepages.sh@169 -- # local node 00:02:56.295 00:36:48 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:56.295 00:36:48 -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:56.295 00:36:48 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:56.295 00:36:48 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:56.295 00:36:48 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:56.295 00:36:48 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:56.295 00:36:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:56.295 00:36:48 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:56.295 00:36:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:56.295 00:36:48 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:56.295 00:36:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:56.295 00:36:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:56.295 00:36:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:56.295 00:36:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:56.295 00:36:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:56.295 00:36:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:56.295 00:36:48 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:56.295 00:36:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.295 00:36:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:56.295 00:36:48 -- setup/hugepages.sh@83 -- # : 256 00:02:56.295 00:36:48 -- setup/hugepages.sh@84 -- # : 1 00:02:56.295 00:36:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.295 00:36:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:56.295 00:36:48 -- setup/hugepages.sh@83 -- # : 0 00:02:56.295 00:36:48 -- setup/hugepages.sh@84 -- # : 0 00:02:56.295 00:36:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.295 00:36:48 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:56.295 00:36:48 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:56.295 00:36:48 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:56.295 00:36:48 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:56.295 00:36:48 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:56.295 00:36:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:56.295 
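The odd_alloc check just above comes out clean: the comparison [[ 512 513 == 512 513 ]] succeeds, the test ends, and custom_alloc starts by converting its requested pool sizes into hugepage counts. A minimal sketch of that arithmetic, reconstructed from this trace and assuming the sizes are passed in kB against the 2048 kB default hugepage size reported further down in the log:

    default_hugepages=2048                        # kB, Hugepagesize from /proc/meminfo
    size=1048576                                  # first request in the trace, in kB
    nr_hugepages=$(( size / default_hugepages ))  # -> 512, as the trace shows
    _no_nodes=2
    # with no explicit node list the count is split evenly across both nodes
    per_node=$(( nr_hugepages / _no_nodes ))      # -> 256 per node
    # the second request (2097152 kB) produces 1024 pages the same way

nodes_hp[0] then holds the 512-page pool and nodes_hp[1] the 1024-page pool that the HUGENODE string hands to setup.sh a few lines below.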
00:36:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:56.295 00:36:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:56.295 00:36:48 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:56.295 00:36:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:56.295 00:36:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:56.296 00:36:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:56.296 00:36:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:56.296 00:36:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:56.296 00:36:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:56.296 00:36:48 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:56.296 00:36:48 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:56.296 00:36:48 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:56.296 00:36:48 -- setup/hugepages.sh@78 -- # return 0 00:02:56.296 00:36:48 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:56.296 00:36:48 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:56.296 00:36:48 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:56.296 00:36:48 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:56.296 00:36:48 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:56.296 00:36:48 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:56.296 00:36:48 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:56.296 00:36:48 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:56.296 00:36:48 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:56.296 00:36:48 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:56.296 00:36:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:56.296 00:36:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:56.296 00:36:48 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:56.296 00:36:48 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:56.296 00:36:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:56.296 00:36:48 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:56.296 00:36:48 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:56.296 00:36:48 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:56.296 00:36:48 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:56.296 00:36:48 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:56.296 00:36:48 -- setup/hugepages.sh@78 -- # return 0 00:02:56.296 00:36:48 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:56.296 00:36:48 -- setup/hugepages.sh@187 -- # setup output 00:02:56.296 00:36:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.296 00:36:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:58.835 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:58.835 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:58.835 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.835 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.835 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.835 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.835 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:58.836 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:58.836 0000:00:04.0 (8086 2021): Already using the 
vfio-pci driver 00:02:58.836 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:58.836 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:58.836 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:58.836 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:58.836 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:58.836 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:58.836 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:59.099 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:59.099 00:36:51 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:59.099 00:36:51 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:59.099 00:36:51 -- setup/hugepages.sh@89 -- # local node 00:02:59.099 00:36:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:59.099 00:36:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:59.099 00:36:51 -- setup/hugepages.sh@92 -- # local surp 00:02:59.099 00:36:51 -- setup/hugepages.sh@93 -- # local resv 00:02:59.099 00:36:51 -- setup/hugepages.sh@94 -- # local anon 00:02:59.099 00:36:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:59.099 00:36:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:59.099 00:36:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:59.099 00:36:51 -- setup/common.sh@18 -- # local node= 00:02:59.099 00:36:51 -- setup/common.sh@19 -- # local var val 00:02:59.099 00:36:51 -- setup/common.sh@20 -- # local mem_f mem 00:02:59.099 00:36:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.099 00:36:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.099 00:36:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.099 00:36:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.099 00:36:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 169977504 kB' 'MemAvailable: 173807084 kB' 'Buffers: 3888 kB' 'Cached: 14208100 kB' 'SwapCached: 0 kB' 'Active: 11190548 kB' 'Inactive: 3663216 kB' 'Active(anon): 10129776 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645060 kB' 'Mapped: 205676 kB' 'Shmem: 9488000 kB' 'KReclaimable: 499616 kB' 'Slab: 1130992 kB' 'SReclaimable: 499616 kB' 'SUnreclaim: 631376 kB' 'KernelStack: 20608 kB' 'PageTables: 9752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 11637824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316164 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # 
continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.099 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.099 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 
-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 
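Every meminfo lookup in this log is the same traced loop from setup/common.sh: pick /proc/meminfo or a node's own meminfo file, strip the "Node <n> " prefix, then read key/value pairs with IFS=': ' until the requested field is found. A minimal self-contained sketch of that logic, following the names in the trace (the sed-based prefix strip is an assumption; the traced script uses mapfile plus a parameter expansion instead):

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # a per-node query reads that node's meminfo instead of the global file
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # per-node files prefix every line with "Node <n> "; drop it before matching
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        echo 0
    }

Called as get_meminfo HugePages_Surp 1, for example, it prints node 1's surplus count, which is 0 in the run above; the long runs of "continue" in the trace are simply this loop skipping every non-matching key.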
00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.100 00:36:51 -- setup/common.sh@33 -- # echo 0 00:02:59.100 00:36:51 -- setup/common.sh@33 -- # return 0 00:02:59.100 00:36:51 -- setup/hugepages.sh@97 -- # anon=0 00:02:59.100 00:36:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:59.100 00:36:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.100 00:36:51 -- setup/common.sh@18 -- # local node= 00:02:59.100 00:36:51 -- setup/common.sh@19 -- # local var val 00:02:59.100 00:36:51 -- setup/common.sh@20 -- # local mem_f mem 00:02:59.100 00:36:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.100 00:36:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.100 00:36:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.100 00:36:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.100 00:36:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 169978916 kB' 'MemAvailable: 173808480 kB' 'Buffers: 3888 kB' 'Cached: 14208104 kB' 'SwapCached: 0 kB' 'Active: 11190208 kB' 'Inactive: 3663216 kB' 'Active(anon): 10129436 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644776 kB' 'Mapped: 205612 kB' 'Shmem: 9488004 kB' 'KReclaimable: 499584 kB' 'Slab: 1130936 kB' 'SReclaimable: 499584 kB' 'SUnreclaim: 631352 kB' 'KernelStack: 20592 kB' 'PageTables: 9708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 11637836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316132 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.100 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.100 00:36:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r 
var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 
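With anon=0 established above, the verification step repeats the same lookup for HugePages_Surp here and for HugePages_Rsvd in the block that follows, then checks the pool against what was requested. A rough sketch of that accounting, reusing the get_meminfo sketch above (hedged reconstruction; the per-node sorted_t/sorted_s bookkeeping in setup/hugepages.sh is more involved than shown):

    nr_hugepages=1536                       # requested pool, as echoed in this trace
    anon=$(get_meminfo AnonHugePages)       # transparent hugepages, expected 0
    surp=$(get_meminfo HugePages_Surp)      # surplus pages, 0 here
    resv=$(get_meminfo HugePages_Rsvd)      # reserved pages, 0 here
    total=$(get_meminfo HugePages_Total)    # 1536 in this run
    # roughly: every page the kernel reports must be accounted for by the request
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count"
    # per node, the adjusted counts are then compared against the
    # nodes_hp[0]=512,nodes_hp[1]=1024 split requested via HUGENODE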
00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.101 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.101 00:36:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.102 00:36:51 -- setup/common.sh@33 -- # echo 0 00:02:59.102 00:36:51 -- setup/common.sh@33 -- # return 0 00:02:59.102 00:36:51 -- setup/hugepages.sh@99 -- # surp=0 00:02:59.102 00:36:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:59.102 00:36:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:59.102 00:36:51 -- setup/common.sh@18 -- # local node= 00:02:59.102 00:36:51 -- setup/common.sh@19 -- # local var val 00:02:59.102 00:36:51 -- setup/common.sh@20 
-- # local mem_f mem 00:02:59.102 00:36:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.102 00:36:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.102 00:36:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.102 00:36:51 -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.102 00:36:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.102 00:36:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 169978916 kB' 'MemAvailable: 173808480 kB' 'Buffers: 3888 kB' 'Cached: 14208112 kB' 'SwapCached: 0 kB' 'Active: 11190556 kB' 'Inactive: 3663216 kB' 'Active(anon): 10129784 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645124 kB' 'Mapped: 205612 kB' 'Shmem: 9488012 kB' 'KReclaimable: 499584 kB' 'Slab: 1130936 kB' 'SReclaimable: 499584 kB' 'SUnreclaim: 631352 kB' 'KernelStack: 20608 kB' 'PageTables: 9756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 11637852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316132 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.102 00:36:51 -- setup/common.sh@32 -- # continue 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.102 00:36:51 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.102 
00:36:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:59.102 00:36:51 -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32: IFS=': ' / read -r var val _ steps through the remaining /proc/meminfo fields (Inactive ... HugePages_Free); none matches HugePages_Rsvd, so each one continues ...]
00:02:59.103 00:36:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:59.103 00:36:51 -- setup/common.sh@33 -- # echo 0
00:02:59.103 00:36:51 -- setup/common.sh@33 -- # return 0
00:02:59.103 00:36:51 -- setup/hugepages.sh@100 -- # resv=0
00:02:59.103 00:36:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:02:59.103 nr_hugepages=1536
00:02:59.103 00:36:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:59.103 resv_hugepages=0
00:02:59.103 00:36:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:59.103 surplus_hugepages=0
00:02:59.103 00:36:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:59.103 anon_hugepages=0
00:02:59.103 00:36:51 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:59.103 00:36:51 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:02:59.103 00:36:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... setup/common.sh@17-31: get_meminfo HugePages_Total with no node argument maps /proc/meminfo into mem[] ...]
00:02:59.103 00:36:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 169979168 kB' 'MemAvailable: 173808732 kB' 'Buffers: 3888 kB' 'Cached: 14208128 kB' 'SwapCached: 0 kB' 'Active: 11190240 kB' 'Inactive: 3663216 kB' 'Active(anon): 10129468 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644772 kB' 'Mapped: 205612 kB' 'Shmem: 9488028 kB' 'KReclaimable: 499584 kB' 'Slab: 1130936 kB' 'SReclaimable: 499584 kB' 'SUnreclaim: 631352 kB' 'KernelStack: 20592 kB' 'PageTables: 9708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506328 kB' 'Committed_AS: 11637864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316132 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB'
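The xtrace above shows how setup/common.sh's get_meminfo works: it prints the collected meminfo fields and walks them with IFS=': ' / read -r var val _ until the requested key (HugePages_Rsvd above, HugePages_Total next) matches, then echoes that value. A minimal stand-alone sketch of the same parse, assuming only the standard /proc/meminfo and /sys/devices/system/node/nodeN/meminfo layouts (an illustration, not the actual setup/common.sh source):

#!/usr/bin/env bash
# get_meminfo_sketch KEY [NODE]
# Prints the numeric value of KEY from /proc/meminfo, or from the given
# NUMA node's meminfo file when NODE is supplied.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        # Per-node files prefix every line with "Node N "; strip it first.
        line=${line#Node "$node" }
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

On the state captured above, get_meminfo_sketch HugePages_Total would print 1536, and get_meminfo_sketch HugePages_Surp 0 would print node0's surplus count.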
[... setup/common.sh@31-32: read -r var val _ steps through the fields again, this time looking for HugePages_Total; every other field continues ...]
00:02:59.105 00:36:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:59.105 00:36:51 -- setup/common.sh@33 -- # echo 1536
00:02:59.105 00:36:51 -- setup/common.sh@33 -- # return 0
00:02:59.105 00:36:51 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:59.105 00:36:51 -- setup/hugepages.sh@112 -- # get_nodes
00:02:59.105 00:36:51 -- setup/hugepages.sh@27 -- # local node
00:02:59.105 00:36:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:59.105 00:36:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:59.105 00:36:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:59.105 00:36:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:59.105 00:36:51 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:59.105 00:36:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
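get_nodes above records the per-node split the test expects (512 pages on node0, 1024 on node1) before it checks each node's surplus count. The kernel exposes the same per-node counters directly in sysfs; a quick sketch for listing them (standard kernel paths, independent of the SPDK helper functions):

#!/usr/bin/env bash
# Print per-NUMA-node 2 MiB hugepage counts straight from sysfs.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    hp=$node_dir/hugepages/hugepages-2048kB
    printf 'node%s: total=%s free=%s surplus=%s\n' \
        "$node" \
        "$(cat "$hp/nr_hugepages")" \
        "$(cat "$hp/free_hugepages")" \
        "$(cat "$hp/surplus_hugepages")"
done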
00:02:59.105 00:36:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:59.105 00:36:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:59.105 00:36:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[... setup/common.sh@17-31: get_meminfo HugePages_Surp for node 0 maps /sys/devices/system/node/node0/meminfo into mem[] ...]
00:02:59.105 00:36:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91999172 kB' 'MemUsed: 5616456 kB' 'SwapCached: 0 kB' 'Active: 3195100 kB' 'Inactive: 135680 kB' 'Active(anon): 2752508 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 135680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2836772 kB' 'Mapped: 89448 kB' 'AnonPages: 497180 kB' 'Shmem: 2258500 kB' 'KernelStack: 13368 kB' 'PageTables: 6116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 269324 kB' 'Slab: 562292 kB' 'SReclaimable: 269324 kB' 'SUnreclaim: 292968 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@31-32: per-field scan of the node0 values; nothing before HugePages_Surp matches, so each field continues ...]
00:02:59.368 00:36:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.368 00:36:51 -- setup/common.sh@33 -- # echo 0
00:02:59.368 00:36:51 -- setup/common.sh@33 -- # return 0
00:02:59.368 00:36:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:59.368 00:36:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:59.368 00:36:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:59.368 00:36:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:59.368 00:36:51 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:59.368 00:36:51 -- setup/common.sh@18 -- # local node=1
[... setup/common.sh@19-31: mem_f=/sys/devices/system/node/node1/meminfo is mapped into mem[] ...]
00:02:59.368 00:36:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765552 kB' 'MemFree: 77979240 kB' 'MemUsed: 15786312 kB' 'SwapCached: 0 kB' 'Active: 7995020 kB' 'Inactive: 3527536 kB' 'Active(anon): 7376840 kB' 'Inactive(anon): 0 kB' 'Active(file): 618180 kB' 'Inactive(file): 3527536 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 11375260 kB' 'Mapped: 116164 kB' 'AnonPages: 147416 kB' 'Shmem: 7229544 kB' 'KernelStack: 7208 kB' 'PageTables: 3540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 230260 kB' 'Slab: 568644 kB' 'SReclaimable: 230260 kB' 'SUnreclaim: 338384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... setup/common.sh@31-32: per-field scan of the node1 values until HugePages_Surp is reached ...]
00:02:59.369 00:36:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.369 00:36:51 -- setup/common.sh@33 -- # echo 0
00:02:59.369 00:36:51 -- setup/common.sh@33 -- # return 0
00:02:59.369 00:36:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:59.369 00:36:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:59.369 00:36:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:59.369 00:36:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:59.369 00:36:51 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:59.369 node0=512 expecting 512
00:36:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:59.369 00:36:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:59.369 00:36:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:59.369 00:36:51 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:02:59.369 node1=1024 expecting 1024
00:36:51 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:02:59.369
00:02:59.369 real 0m2.904s
00:02:59.369 user 0m1.165s
00:02:59.369 sys 0m1.798s
00:02:59.369 00:36:51 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:59.369 00:36:51 -- common/autotest_common.sh@10 -- # set +x
00:02:59.369 ************************************
00:02:59.369 END TEST custom_alloc
00:02:59.369 ************************************
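custom_alloc passed with the asymmetric 512/1024 split confirmed on both nodes. For reference, such a split can be requested by hand through the per-node sysfs counters and read back the same way the test prints it; a rough sketch under those assumptions (needs root, and it is not the SPDK setup.sh code path):

#!/usr/bin/env bash
# Request an asymmetric 2 MiB hugepage split and report what was granted.
declare -A want=([0]=512 [1]=1024)
for node in "${!want[@]}"; do
    hp=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB
    echo "${want[$node]}" > "$hp/nr_hugepages"   # needs root
    got=$(cat "$hp/nr_hugepages")
    echo "node$node=$got expecting ${want[$node]}"
done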
00:02:59.369 00:36:51 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:02:59.369 00:36:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:59.369 00:36:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:59.369 00:36:51 -- common/autotest_common.sh@10 -- # set +x
00:02:59.369 ************************************
00:02:59.369 START TEST no_shrink_alloc
00:02:59.369 ************************************
00:02:59.369 00:36:51 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:36:51 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:36:51 -- setup/hugepages.sh@49 -- # local size=2097152
00:36:51 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:36:51 -- setup/hugepages.sh@51 -- # shift
00:36:51 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:36:51 -- setup/hugepages.sh@52 -- # local node_ids
00:02:59.369 00:36:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:59.369 00:36:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:59.369 00:36:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:59.369 00:36:51 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:59.369 00:36:51 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:59.369 00:36:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:59.369 00:36:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:59.369 00:36:51 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:59.369 00:36:51 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:59.369 00:36:51 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:59.369 00:36:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:59.369 00:36:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:59.369 00:36:51 -- setup/hugepages.sh@73 -- # return 0
00:02:59.369 00:36:51 -- setup/hugepages.sh@198 -- # setup output
00:02:59.369 00:36:51 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:59.369 00:36:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:02.671 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:02.671 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:02.671 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
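setup.sh reports every test device as already bound to vfio-pci. The binding can be double-checked straight from sysfs; a small sketch using a few of the BDFs printed above:

#!/usr/bin/env bash
# Show which kernel driver each PCI function is currently bound to.
for bdf in 0000:00:04.0 0000:5e:00.0 0000:80:04.0; do
    link=/sys/bus/pci/devices/$bdf/driver
    if [[ -e $link ]]; then
        echo "$bdf -> $(basename "$(readlink -f "$link")")"
    else
        echo "$bdf -> no driver bound"
    fi
done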
00:03:02.671 00:36:54 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:36:54 -- setup/hugepages.sh@89 -- # local node
00:36:54 -- setup/hugepages.sh@90 -- # local sorted_t
00:36:54 -- setup/hugepages.sh@91 -- # local sorted_s
00:36:54 -- setup/hugepages.sh@92 -- # local surp
00:36:54 -- setup/hugepages.sh@93 -- # local resv
00:36:54 -- setup/hugepages.sh@94 -- # local anon
00:36:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:36:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[... setup/common.sh@17-31: get_meminfo AnonHugePages with no node argument maps /proc/meminfo into mem[] ...]
00:03:02.672 00:36:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171005688 kB' 'MemAvailable: 174835252 kB' 'Buffers: 3888 kB' 'Cached: 14208220 kB' 'SwapCached: 0 kB' 'Active: 11192304 kB' 'Inactive: 3663216 kB' 'Active(anon): 10131532 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646272 kB' 'Mapped: 206048 kB' 'Shmem: 9488120 kB' 'KReclaimable: 499584 kB' 'Slab: 1131232 kB' 'SReclaimable: 499584 kB' 'SUnreclaim: 631648 kB' 'KernelStack: 20592 kB' 'PageTables: 10188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11640956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316148 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB'
[... setup/common.sh@31-32: per-field scan for AnonHugePages; every field from MemTotal through Bounce continues ...]
00:36:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.673 00:36:54 -- setup/common.sh@33 -- # echo 0 00:03:02.673 00:36:54 -- setup/common.sh@33 -- # return 0 00:03:02.673 00:36:54 -- setup/hugepages.sh@97 -- # anon=0 00:03:02.673 00:36:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:02.673 00:36:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.673 00:36:54 -- setup/common.sh@18 -- # local node= 00:03:02.673 00:36:54 -- setup/common.sh@19 -- # local var val 00:03:02.673 00:36:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.673 00:36:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.673 00:36:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.673 00:36:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.673 00:36:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.673 00:36:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171007128 kB' 'MemAvailable: 174836692 kB' 'Buffers: 3888 kB' 'Cached: 14208224 kB' 'SwapCached: 0 kB' 'Active: 11191776 kB' 'Inactive: 3663216 kB' 'Active(anon): 10131004 kB' 'Inactive(anon): 0 kB' 
'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645772 kB' 'Mapped: 205744 kB' 'Shmem: 9488124 kB' 'KReclaimable: 499584 kB' 'Slab: 1131168 kB' 'SReclaimable: 499584 kB' 'SUnreclaim: 631584 kB' 'KernelStack: 20688 kB' 'PageTables: 10560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11640968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316132 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.673 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.673 00:36:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 
00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.674 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.674 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.675 00:36:54 -- setup/common.sh@33 -- # echo 0 00:03:02.675 00:36:54 -- setup/common.sh@33 -- # return 0 00:03:02.675 00:36:54 -- setup/hugepages.sh@99 -- # surp=0 00:03:02.675 00:36:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:02.675 00:36:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:02.675 00:36:54 -- setup/common.sh@18 -- # local node= 00:03:02.675 00:36:54 -- setup/common.sh@19 -- # local var val 00:03:02.675 00:36:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.675 00:36:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.675 00:36:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.675 00:36:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.675 00:36:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.675 00:36:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171011076 kB' 'MemAvailable: 174840640 kB' 'Buffers: 3888 kB' 'Cached: 14208236 kB' 'SwapCached: 0 kB' 'Active: 11191396 kB' 'Inactive: 3663216 kB' 'Active(anon): 10130624 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645824 kB' 'Mapped: 205664 kB' 'Shmem: 9488136 kB' 'KReclaimable: 499584 kB' 'Slab: 1131200 kB' 'SReclaimable: 499584 kB' 'SUnreclaim: 631616 kB' 'KernelStack: 20736 kB' 'PageTables: 10004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11639464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316276 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.675 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.675 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 
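The three scans in this block (AnonHugePages giving anon=0, HugePages_Surp giving surp=0, and the HugePages_Rsvd pass still running here) all go through the same get_meminfo helper in setup/common.sh. Reconstructed from the trace, a minimal sketch of that helper looks roughly like this; the names come from the trace, and the real source may differ in details:

  # Minimal sketch of get_meminfo, reconstructed from the xtrace output; not the verbatim SPDK source.
  shopt -s extglob
  get_meminfo() {
      local get=$1       # key to look up, e.g. HugePages_Surp
      local node=${2:-}  # optional NUMA node number
      local var val _
      local mem_f=/proc/meminfo mem
      # Prefer the per-node meminfo file when a node number is given and the file exists.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <n> "; strip that prefix.
      mem=("${mem[@]#Node +([0-9]) }")
      local entry
      for entry in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$entry"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

With node unset the sysfs path does not exist, so the helper falls back to /proc/meminfo, which is exactly what the "[[ -e /sys/devices/system/node/node/meminfo ]]" lines above show.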
00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 
00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.676 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.676 00:36:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.677 00:36:54 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.677 00:36:54 -- setup/common.sh@33 -- # echo 0 00:03:02.677 00:36:54 -- setup/common.sh@33 -- # return 0 00:03:02.677 00:36:54 -- setup/hugepages.sh@100 -- # resv=0 00:03:02.677 00:36:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:02.677 nr_hugepages=1024 00:03:02.677 00:36:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:02.677 resv_hugepages=0 00:03:02.677 00:36:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:02.677 surplus_hugepages=0 00:03:02.677 00:36:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:02.677 anon_hugepages=0 00:03:02.677 00:36:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:02.677 00:36:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:02.677 00:36:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:02.677 00:36:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:02.677 00:36:54 -- setup/common.sh@18 -- # local node= 00:03:02.677 00:36:54 -- setup/common.sh@19 -- # local var val 00:03:02.677 00:36:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.677 00:36:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.677 00:36:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.677 00:36:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.677 00:36:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.677 00:36:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.677 00:36:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171009912 kB' 'MemAvailable: 174839476 kB' 'Buffers: 3888 kB' 'Cached: 14208248 kB' 'SwapCached: 0 kB' 'Active: 11191908 kB' 'Inactive: 3663216 kB' 'Active(anon): 10131136 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646324 kB' 'Mapped: 205664 kB' 'Shmem: 9488148 kB' 'KReclaimable: 499584 kB' 'Slab: 1131200 kB' 'SReclaimable: 499584 kB' 'SUnreclaim: 631616 kB' 'KernelStack: 20928 kB' 'PageTables: 10388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11640996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316308 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.677 00:36:54 -- setup/common.sh@31 
-- # read -r var val _ 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.677 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.677 00:36:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- 
setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.678 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.678 00:36:54 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.678 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 
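Just before this final HugePages_Total scan, the script echoed its summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and asserted that the pool adds up before splitting it across NUMA nodes. The check traced at setup/hugepages.sh@107-110 amounts to the following sketch; it reuses the get_meminfo sketch above, and the exact wiring in the real script may differ:

  requested=1024                                # pages the test configured
  surp=$(get_meminfo HugePages_Surp)            # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)            # 0 in this run
  total=$(get_meminfo HugePages_Total)          # 1024 in this run
  (( total == requested + surp + resv )) &&
      echo "hugepage pool consistent: $total pages of 2048 kB"

With surp and resv both 0 the condition reduces to total == 1024, which matches the 1024 echoed by the HugePages_Total lookup in this block.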
00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.679 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.679 00:36:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.679 00:36:54 -- setup/common.sh@33 -- # echo 1024 00:03:02.679 00:36:54 -- setup/common.sh@33 -- # return 0 00:03:02.679 00:36:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:02.679 00:36:54 -- setup/hugepages.sh@112 -- # get_nodes 00:03:02.679 00:36:54 -- setup/hugepages.sh@27 -- # local node 00:03:02.679 00:36:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.679 00:36:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:02.679 00:36:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.679 00:36:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:02.679 00:36:54 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:02.679 00:36:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:02.679 00:36:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.679 00:36:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.679 00:36:54 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:02.680 00:36:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.680 00:36:54 -- setup/common.sh@18 -- # local node=0 00:03:02.680 00:36:54 -- setup/common.sh@19 -- # local var val 00:03:02.680 00:36:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.680 00:36:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.680 00:36:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:02.680 00:36:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:02.680 00:36:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.680 00:36:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90947932 kB' 'MemUsed: 6667696 kB' 'SwapCached: 0 kB' 'Active: 3194836 kB' 'Inactive: 135680 kB' 'Active(anon): 2752244 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 135680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2836864 kB' 'Mapped: 89480 kB' 'AnonPages: 496852 kB' 'Shmem: 2258592 kB' 'KernelStack: 13400 kB' 'PageTables: 6116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 269324 kB' 'Slab: 562324 kB' 'SReclaimable: 269324 kB' 'SUnreclaim: 293000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 
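From here the same accounting is repeated per NUMA node: get_nodes found two nodes (nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2), and get_meminfo is now called with node=0, so it reads /sys/devices/system/node/node0/meminfo and strips the "Node 0" prefix from each line, as the smaller per-node snapshot above shows (MemTotal: 97615628 kB, HugePages_Total: 1024, HugePages_Surp: 0). A rough sketch of that per-node pass, reusing the get_meminfo sketch above; the sysfs path used to fill nodes_sys is an assumption, not taken from the trace:

  # Per-node pass sketched from setup/hugepages.sh@112-117; requires the
  # extglob option and the get_meminfo sketch defined earlier.
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # Assumed source of the per-node counts (1024 and 0 in this run).
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"               # 2 on this system
  for n in "${!nodes_sys[@]}"; do
      surp=$(get_meminfo HugePages_Surp "$n")    # 0 for node0 here
      echo "node$n: ${nodes_sys[$n]} hugepages, surplus $surp"
  done

The real script additionally keeps a nodes_test array (visible at hugepages.sh@115-116) that it bumps by resv, which is 0 here, before doing these per-node lookups.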
00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:54 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.680 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.680 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 
00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # continue 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.681 00:36:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.681 00:36:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.681 00:36:55 -- setup/common.sh@33 -- # echo 0 00:03:02.681 00:36:55 -- setup/common.sh@33 -- # return 0 00:03:02.681 00:36:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.681 00:36:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:02.681 00:36:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.681 00:36:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.681 00:36:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:02.681 node0=1024 expecting 1024 00:03:02.681 00:36:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:02.681 00:36:55 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:02.681 00:36:55 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:02.681 00:36:55 -- setup/hugepages.sh@202 -- # setup output 00:03:02.681 00:36:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.681 00:36:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.226 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:05.226 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:05.226 0000:80:04.0 (8086 2021): Already using the vfio-pci 
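Before the trace moves on to verify_nr_hugepages, it is worth spelling out what the get_meminfo calls above are doing: common.sh picks /proc/meminfo by default, switches to /sys/devices/system/node/nodeN/meminfo when a node argument is given, strips the "Node N " prefix those per-node files carry, and then walks the snapshot line by line until the requested field matches. The following is a rough, self-contained sketch reconstructed from the trace, not copied from SPDK's common.sh:

    shopt -s extglob                                 # for the +([0-9]) pattern below
    get_meminfo() {                                  # usage: get_meminfo <field> [node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Prefer the per-node snapshot when a node was requested and it exists.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")             # per-node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # e.g. "HugePages_Surp:   0" -> var=HugePages_Surp val=0
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1                                     # field not found (sketch behaviour, not the script's)
    }

Called as get_meminfo HugePages_Surp 0, this prints 0 on this machine, matching the echo/return pair at common.sh@33 in the trace.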
00:03:05.226 00:36:57 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:05.226 00:36:57 -- setup/hugepages.sh@89 -- # local node
00:03:05.226 00:36:57 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:05.226 00:36:57 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:05.226 00:36:57 -- setup/hugepages.sh@92 -- # local surp
00:03:05.226 00:36:57 -- setup/hugepages.sh@93 -- # local resv
00:03:05.226 00:36:57 -- setup/hugepages.sh@94 -- # local anon
00:03:05.226 00:36:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:05.226 00:36:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:05.226 00:36:57 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:05.226 00:36:57 -- setup/common.sh@18 -- # local node=
00:03:05.226 00:36:57 -- setup/common.sh@19 -- # local var val
00:03:05.226 00:36:57 -- setup/common.sh@20 -- # local mem_f mem
00:03:05.226 00:36:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.226 00:36:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.226 00:36:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.226 00:36:57 -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.226 00:36:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.226 00:36:57 -- setup/common.sh@31 -- # IFS=': '
00:03:05.226 00:36:57 -- setup/common.sh@31 -- # read -r var val _
00:03:05.226 00:36:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171004304 kB' 'MemAvailable: 174833868 kB' 'Buffers: 3888 kB' 'Cached: 14208312 kB' 'SwapCached: 0 kB' 'Active: 11193004 kB' 'Inactive: 3663216 kB' 'Active(anon): 10132232 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647272 kB' 'Mapped: 205796 kB' 'Shmem: 9488212 kB' 'KReclaimable: 499584 kB' 'Slab: 1131340 kB' 'SReclaimable: 499584 kB' 'SUnreclaim: 631756 kB' 'KernelStack: 20976 kB' 'PageTables: 10224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11641448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316388 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB'
[... per-key scan of the snapshot elided; no field before AnonHugePages matches ...]
00:03:05.227 00:36:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:05.227 00:36:57 -- setup/common.sh@33 -- # echo 0
00:03:05.227 00:36:57 -- setup/common.sh@33 -- # return 0
00:03:05.227 00:36:57 -- setup/hugepages.sh@97 -- # anon=0
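The verify_nr_hugepages pass first decides whether transparent hugepages could be inflating the numbers: the test at hugepages.sh@96 checks that the THP "enabled" string (here "always [madvise] never") does not contain "[never]", and only then reads AnonHugePages, which is 0 kB on this machine, hence anon=0. Roughly, and assuming the THP state comes from the usual sysfs knob (the script's exact source for that string is not visible in this excerpt), the logic is:

    # Sketch only: the THP "enabled" string and AnonHugePages drive the anon figure.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB of THP-backed anonymous memory (0 in this run)
    else
        anon=0
    fi
    echo "anon=$anon"

get_meminfo here is the helper sketched earlier in this excerpt.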
00:03:05.227 00:36:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[... common.sh@17-@31 prologue identical to the AnonHugePages lookup above (get=HugePages_Surp, no node argument, snapshot read from /proc/meminfo) ...]
00:03:05.227 00:36:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171005208 kB' 'MemAvailable: 174834772 kB' 'Buffers: 3888 kB' 'Cached: 14208312 kB' 'SwapCached: 0 kB' 'Active: 11192860 kB' 'Inactive: 3663216 kB' 'Active(anon): 10132088 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647092 kB' 'Mapped: 205748 kB' 'Shmem: 9488212 kB' 'KReclaimable: 499584 kB' 'Slab: 1131332 kB' 'SReclaimable: 499584 kB' 'SUnreclaim: 631748 kB' 'KernelStack: 20688 kB' 'PageTables: 9968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11639940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316292 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB'
[... per-key scan of the snapshot elided; only HugePages_Surp matches ...]
00:03:05.228 00:36:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:05.228 00:36:57 -- setup/common.sh@33 -- # echo 0
00:03:05.228 00:36:57 -- setup/common.sh@33 -- # return 0
00:03:05.228 00:36:57 -- setup/hugepages.sh@99 -- # surp=0
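For completeness, the hugepage counters the script keeps extracting from /proc/meminfo are also exposed directly by the kernel under sysfs. This is not something the test itself does, just a quick way to cross-check the same numbers by hand (2048 kB pages on this system):

    d=/sys/kernel/mm/hugepages/hugepages-2048kB
    echo "total=$(<$d/nr_hugepages) free=$(<$d/free_hugepages)"
    echo "resv=$(<$d/resv_hugepages) surp=$(<$d/surplus_hugepages)"
    # Per-NUMA-node view, matching the node0 lookup near the top of this excerpt:
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages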
00:03:05.228 00:36:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... common.sh@17-@31 prologue as above (get=HugePages_Rsvd, /proc/meminfo) ...]
00:03:05.229 00:36:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381180 kB' 'MemFree: 171006980 kB' 'MemAvailable: 174836544 kB' 'Buffers: 3888 kB' 'Cached: 14208328 kB' 'SwapCached: 0 kB' 'Active: 11192364 kB' 'Inactive: 3663216 kB' 'Active(anon): 10131592 kB' 'Inactive(anon): 0 kB' 'Active(file): 1060772 kB' 'Inactive(file): 3663216 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646552 kB' 'Mapped: 205672 kB' 'Shmem: 9488228 kB' 'KReclaimable: 499584 kB' 'Slab: 1131240 kB' 'SReclaimable: 499584 kB' 'SUnreclaim: 631656 kB' 'KernelStack: 20736 kB' 'PageTables: 9768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11641324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316308 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB'
[... per-key scan of the snapshot elided; only HugePages_Rsvd matches ...]
00:03:05.230 00:36:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:05.230 00:36:57 -- setup/common.sh@33 -- # echo 0
00:03:05.230 00:36:57 -- setup/common.sh@33 -- # return 0
00:03:05.230 00:36:57 -- setup/hugepages.sh@100 -- # resv=0
00:03:05.230 00:36:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:05.230 nr_hugepages=1024
00:03:05.230 00:36:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:05.230 resv_hugepages=0
00:03:05.230 00:36:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:05.230 surplus_hugepages=0
00:03:05.230 00:36:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:05.230 anon_hugepages=0
00:03:05.230 00:36:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:05.230 00:36:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
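With anon, surp and resv all 0, the checks at hugepages.sh@107 and @109 reduce to plain accounting: the 1024 pages the test expects must equal the configured nr_hugepages plus surplus and reserved pages, and nr_hugepages itself must be 1024. A minimal sketch of that arithmetic, using the values echoed above (the exact expression inside hugepages.sh may differ slightly):

    expected=1024 nr_hugepages=1024 surp=0 resv=0
    if (( expected == nr_hugepages + surp + resv )) && (( expected == nr_hugepages )); then
        echo "hugepage accounting is consistent"
    else
        echo "hugepage accounting mismatch" >&2
    fi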
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030616 kB' 'Committed_AS: 11639968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316324 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3412948 kB' 'DirectMap2M: 28772352 kB' 'DirectMap1G: 169869312 kB' 00:03:05.230 00:36:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.230 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.230 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.230 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.230 00:36:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.230 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.230 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.230 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.230 00:36:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.230 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.230 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.230 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.230 00:36:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.230 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.230 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.230 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.230 00:36:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.230 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.230 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.230 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.230 00:36:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.230 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.230 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.230 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 
00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 
00:36:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 
00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.231 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.231 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 
00:36:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.232 00:36:57 -- setup/common.sh@33 -- # echo 1024 00:03:05.232 00:36:57 -- setup/common.sh@33 -- # return 0 00:03:05.232 00:36:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:05.232 00:36:57 -- setup/hugepages.sh@112 -- # get_nodes 00:03:05.232 00:36:57 -- setup/hugepages.sh@27 -- # local node 00:03:05.232 00:36:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.232 00:36:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:05.232 00:36:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:05.232 00:36:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:05.232 00:36:57 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:05.232 00:36:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:05.232 00:36:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.232 00:36:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.232 00:36:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:05.232 00:36:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.232 00:36:57 -- setup/common.sh@18 -- # local node=0 00:03:05.232 00:36:57 -- setup/common.sh@19 -- # local var val 00:03:05.232 00:36:57 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.232 00:36:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.232 00:36:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:05.232 00:36:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:05.232 00:36:57 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.232 00:36:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.232 00:36:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90949892 kB' 'MemUsed: 6665736 kB' 'SwapCached: 0 kB' 'Active: 3195024 kB' 'Inactive: 135680 kB' 'Active(anon): 2752432 kB' 'Inactive(anon): 0 kB' 'Active(file): 442592 kB' 'Inactive(file): 135680 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2836940 kB' 'Mapped: 89488 kB' 'AnonPages: 496896 kB' 'Shmem: 2258668 kB' 'KernelStack: 13352 kB' 'PageTables: 6000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 269324 kB' 'Slab: 562324 kB' 'SReclaimable: 269324 kB' 'SUnreclaim: 293000 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # 
continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.232 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.232 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.233 00:36:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.233 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.233 00:36:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.233 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.233 00:36:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.233 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.233 00:36:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.233 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.233 00:36:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.233 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.233 00:36:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.233 00:36:57 -- setup/common.sh@32 -- # continue 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.233 00:36:57 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.233 00:36:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.233 00:36:57 -- setup/common.sh@33 -- # echo 0 00:03:05.233 00:36:57 -- setup/common.sh@33 -- # return 0 00:03:05.233 00:36:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.233 00:36:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.233 00:36:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.233 00:36:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.233 00:36:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:05.233 node0=1024 expecting 1024 00:03:05.233 00:36:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:05.233 00:03:05.233 real 0m5.894s 00:03:05.233 user 0m2.362s 00:03:05.233 sys 0m3.656s 00:03:05.233 00:36:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:05.233 00:36:57 -- common/autotest_common.sh@10 -- # set +x 00:03:05.233 ************************************ 00:03:05.233 END TEST no_shrink_alloc 00:03:05.233 ************************************ 00:03:05.493 00:36:57 -- setup/hugepages.sh@217 -- # clear_hp 00:03:05.493 00:36:57 -- setup/hugepages.sh@37 -- # local 
node hp 00:03:05.493 00:36:57 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:05.493 00:36:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.493 00:36:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:05.493 00:36:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.493 00:36:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:05.493 00:36:57 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:05.493 00:36:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.493 00:36:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:05.493 00:36:57 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:05.493 00:36:57 -- setup/hugepages.sh@41 -- # echo 0 00:03:05.493 00:36:57 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:05.493 00:36:57 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:05.493 00:03:05.493 real 0m22.017s 00:03:05.493 user 0m8.414s 00:03:05.493 sys 0m12.880s 00:03:05.493 00:36:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:05.493 00:36:57 -- common/autotest_common.sh@10 -- # set +x 00:03:05.493 ************************************ 00:03:05.493 END TEST hugepages 00:03:05.493 ************************************ 00:03:05.493 00:36:57 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:05.493 00:36:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:05.493 00:36:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:05.493 00:36:57 -- common/autotest_common.sh@10 -- # set +x 00:03:05.493 ************************************ 00:03:05.493 START TEST driver 00:03:05.493 ************************************ 00:03:05.493 00:36:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:05.493 * Looking for test storage... 
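The bulk of the hugepages trace above is the setup/common.sh get_meminfo helper doing a linear scan: it reads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node index is passed), drops the "Node N" prefix from the per-node lines, and walks the "field: value" pairs until it reaches the requested key, echoing just the value. A minimal sketch of that pattern, simplified from what the xtrace shows (the real helper slurps the file with mapfile and loops over the array):

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo var val
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        # per-node sysfs lines carry a "Node N " prefix; strip it before splitting
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    resv=$(get_meminfo HugePages_Rsvd)      # the run above echoed 0
    node0=$(get_meminfo HugePages_Total 0)  # per-node0 value, 1024 above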
00:03:05.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:05.493 00:36:58 -- setup/driver.sh@68 -- # setup reset 00:03:05.493 00:36:58 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:05.493 00:36:58 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.685 00:37:02 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:09.685 00:37:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:09.685 00:37:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:09.685 00:37:02 -- common/autotest_common.sh@10 -- # set +x 00:03:09.685 ************************************ 00:03:09.685 START TEST guess_driver 00:03:09.685 ************************************ 00:03:09.685 00:37:02 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:09.685 00:37:02 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:09.685 00:37:02 -- setup/driver.sh@47 -- # local fail=0 00:03:09.685 00:37:02 -- setup/driver.sh@49 -- # pick_driver 00:03:09.685 00:37:02 -- setup/driver.sh@36 -- # vfio 00:03:09.685 00:37:02 -- setup/driver.sh@21 -- # local iommu_grups 00:03:09.685 00:37:02 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:09.685 00:37:02 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:09.685 00:37:02 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:09.685 00:37:02 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:09.685 00:37:02 -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:09.685 00:37:02 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:09.685 00:37:02 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:09.685 00:37:02 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:09.685 00:37:02 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:09.685 00:37:02 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:09.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:09.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:09.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:09.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:09.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:09.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:09.685 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:09.685 00:37:02 -- setup/driver.sh@30 -- # return 0 00:03:09.685 00:37:02 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:09.685 00:37:02 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:09.685 00:37:02 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:09.685 00:37:02 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:09.685 Looking for driver=vfio-pci 00:03:09.685 00:37:02 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:09.685 00:37:02 -- setup/driver.sh@45 -- # setup output config 00:03:09.685 00:37:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.685 00:37:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.216 00:37:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.216 00:37:04 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:12.216 00:37:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.216 00:37:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.216 00:37:04 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.216 00:37:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.216 00:37:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.216 00:37:04 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.216 00:37:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.216 00:37:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.216 00:37:04 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.216 00:37:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.216 00:37:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.216 00:37:04 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.216 00:37:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.475 00:37:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.475 00:37:04 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.475 00:37:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.475 00:37:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.475 00:37:04 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.475 00:37:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.475 00:37:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.475 00:37:04 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.475 00:37:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.475 00:37:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.475 00:37:04 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.475 00:37:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.475 00:37:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.475 00:37:04 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.475 00:37:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.475 00:37:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.475 00:37:04 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.475 00:37:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.475 00:37:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.475 00:37:04 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.475 00:37:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.475 00:37:05 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.475 00:37:05 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.475 00:37:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.475 00:37:05 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.475 00:37:05 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.475 00:37:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.475 00:37:05 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.475 00:37:05 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.475 00:37:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.475 00:37:05 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.475 00:37:05 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.475 00:37:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.412 00:37:05 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:13.412 00:37:05 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.412 00:37:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.412 00:37:05 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:13.412 00:37:05 -- setup/driver.sh@65 -- # setup reset 00:03:13.412 00:37:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.412 00:37:05 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.602 00:03:17.602 real 0m7.509s 00:03:17.602 user 0m2.114s 00:03:17.602 sys 0m3.805s 00:03:17.602 00:37:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:17.602 00:37:09 -- common/autotest_common.sh@10 -- # set +x 00:03:17.602 ************************************ 00:03:17.602 END TEST guess_driver 00:03:17.602 ************************************ 00:03:17.602 00:03:17.602 real 0m11.758s 00:03:17.602 user 0m3.380s 00:03:17.602 sys 0m6.034s 00:03:17.602 00:37:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:17.602 00:37:09 -- common/autotest_common.sh@10 -- # set +x 00:03:17.602 ************************************ 00:03:17.602 END TEST driver 00:03:17.602 ************************************ 00:03:17.602 00:37:09 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:17.602 00:37:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:17.602 00:37:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:17.602 00:37:09 -- common/autotest_common.sh@10 -- # set +x 00:03:17.602 ************************************ 00:03:17.602 START TEST devices 00:03:17.602 ************************************ 00:03:17.602 00:37:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:17.602 * Looking for test storage... 
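The guess_driver run above settles on vfio-pci by checking what the trace shows: whether unsafe no-IOMMU mode is exposed under /sys/module/vfio/parameters, whether /sys/kernel/iommu_groups is populated (174 groups here), and whether modprobe --show-depends vfio_pci resolves to an actual chain of .ko modules. A condensed sketch of that decision, with the unsafe-noiommu branch omitted for brevity:

    pick_driver() {
        # the real driver.sh also reads /sys/module/vfio/parameters/enable_unsafe_noiommu_mode;
        # that path only matters when no IOMMU groups exist, so it is skipped here
        local groups=(/sys/kernel/iommu_groups/*)
        if (( ${#groups[@]} > 0 )) && \
           modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }

    driver=$(pick_driver)   # resolves to vfio-pci here, matching the "Looking for driver=vfio-pci" line above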
00:03:17.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:17.602 00:37:10 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:17.602 00:37:10 -- setup/devices.sh@192 -- # setup reset 00:03:17.602 00:37:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:17.602 00:37:10 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.891 00:37:13 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:20.891 00:37:13 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:20.891 00:37:13 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:20.891 00:37:13 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:20.891 00:37:13 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:20.891 00:37:13 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:20.891 00:37:13 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:20.891 00:37:13 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:20.891 00:37:13 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:20.891 00:37:13 -- setup/devices.sh@196 -- # blocks=() 00:03:20.891 00:37:13 -- setup/devices.sh@196 -- # declare -a blocks 00:03:20.891 00:37:13 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:20.891 00:37:13 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:20.891 00:37:13 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:20.891 00:37:13 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:20.891 00:37:13 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:20.891 00:37:13 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:20.891 00:37:13 -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:20.891 00:37:13 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:20.891 00:37:13 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:20.891 00:37:13 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:20.891 00:37:13 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:20.891 No valid GPT data, bailing 00:03:20.891 00:37:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:20.891 00:37:13 -- scripts/common.sh@391 -- # pt= 00:03:20.891 00:37:13 -- scripts/common.sh@392 -- # return 1 00:03:20.891 00:37:13 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:20.891 00:37:13 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:20.891 00:37:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:20.891 00:37:13 -- setup/common.sh@80 -- # echo 1000204886016 00:03:20.891 00:37:13 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:20.891 00:37:13 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:20.891 00:37:13 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:20.891 00:37:13 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:20.891 00:37:13 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:20.891 00:37:13 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:20.891 00:37:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:20.892 00:37:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:20.892 00:37:13 -- common/autotest_common.sh@10 -- # set +x 00:03:20.892 ************************************ 00:03:20.892 START TEST nvme_mount 00:03:20.892 ************************************ 00:03:20.892 00:37:13 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:03:20.892 00:37:13 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:20.892 00:37:13 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:20.892 00:37:13 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.892 00:37:13 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.892 00:37:13 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:20.892 00:37:13 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:20.892 00:37:13 -- setup/common.sh@40 -- # local part_no=1 00:03:20.892 00:37:13 -- setup/common.sh@41 -- # local size=1073741824 00:03:20.892 00:37:13 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:20.892 00:37:13 -- setup/common.sh@44 -- # parts=() 00:03:20.892 00:37:13 -- setup/common.sh@44 -- # local parts 00:03:20.892 00:37:13 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:20.892 00:37:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:20.892 00:37:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:20.892 00:37:13 -- setup/common.sh@46 -- # (( part++ )) 00:03:20.892 00:37:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:20.892 00:37:13 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:20.892 00:37:13 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:20.892 00:37:13 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:21.829 Creating new GPT entries in memory. 00:03:21.829 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:21.829 other utilities. 00:03:21.829 00:37:14 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:21.829 00:37:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:21.829 00:37:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:21.829 00:37:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:21.829 00:37:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:22.768 Creating new GPT entries in memory. 00:03:22.768 The operation has completed successfully. 
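The sgdisk messages just above come from a short partition/format/mount sequence: zap any existing GPT, create a single 1 GiB partition under flock (paired with scripts/sync_dev_uevents.sh waiting on the partition uevent, hence the later wait), then mkfs.ext4 and mount under test/setup/nvme_mount. Condensed, with $WS standing in for the Jenkins workspace path (the stand-in is mine, not a variable the scripts define):

    disk=/dev/nvme0n1
    mnt=$WS/spdk/test/setup/nvme_mount                   # $WS = workspace root, stand-in only
    sgdisk "$disk" --zap-all                              # wipe old partition tables, as echoed above
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199     # one 1 GiB partition, 2048-sector aligned
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"

The matching cleanup, visible further below, is an umount followed by wipefs --all on the partition and then on the whole disk before the test re-formats the bare device.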
00:03:22.768 00:37:15 -- setup/common.sh@57 -- # (( part++ )) 00:03:22.768 00:37:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:22.768 00:37:15 -- setup/common.sh@62 -- # wait 1489401 00:03:22.768 00:37:15 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.768 00:37:15 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:22.768 00:37:15 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.768 00:37:15 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:22.768 00:37:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:22.768 00:37:15 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.028 00:37:15 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:23.028 00:37:15 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:23.028 00:37:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:23.028 00:37:15 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.028 00:37:15 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:23.028 00:37:15 -- setup/devices.sh@53 -- # local found=0 00:03:23.028 00:37:15 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:23.028 00:37:15 -- setup/devices.sh@56 -- # : 00:03:23.028 00:37:15 -- setup/devices.sh@59 -- # local pci status 00:03:23.028 00:37:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.028 00:37:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:23.028 00:37:15 -- setup/devices.sh@47 -- # setup output config 00:03:23.028 00:37:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.028 00:37:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:25.565 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.565 00:37:18 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:25.565 00:37:18 -- setup/devices.sh@63 -- # found=1 00:03:25.565 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.565 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.565 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.565 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.565 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.565 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.565 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.565 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.566 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.566 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.566 
00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.566 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.566 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.566 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.566 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.566 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.566 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.566 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.566 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.566 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.566 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.566 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.566 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.566 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.566 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.566 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.566 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.566 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.566 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.566 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.566 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.566 00:37:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:25.566 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.566 00:37:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:25.566 00:37:18 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:25.566 00:37:18 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.566 00:37:18 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:25.566 00:37:18 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:25.566 00:37:18 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:25.566 00:37:18 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.887 00:37:18 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.887 00:37:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:25.887 00:37:18 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:25.887 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:25.888 00:37:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:25.888 00:37:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:25.888 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:25.888 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:25.888 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:25.888 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:25.888 00:37:18 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:25.888 00:37:18 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:25.888 00:37:18 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.888 00:37:18 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:25.888 00:37:18 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:26.189 00:37:18 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.189 00:37:18 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:26.189 00:37:18 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:26.189 00:37:18 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:26.189 00:37:18 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.189 00:37:18 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:26.189 00:37:18 -- setup/devices.sh@53 -- # local found=0 00:03:26.189 00:37:18 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:26.189 00:37:18 -- setup/devices.sh@56 -- # : 00:03:26.189 00:37:18 -- setup/devices.sh@59 -- # local pci status 00:03:26.189 00:37:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.189 00:37:18 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:26.189 00:37:18 -- setup/devices.sh@47 -- # setup output config 00:03:26.189 00:37:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.189 00:37:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:28.725 00:37:21 -- setup/devices.sh@63 -- # found=1 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:28.725 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.725 00:37:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:28.725 00:37:21 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:28.725 00:37:21 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.725 00:37:21 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:28.725 00:37:21 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.725 00:37:21 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.725 00:37:21 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:28.726 00:37:21 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:28.726 00:37:21 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:28.726 00:37:21 -- setup/devices.sh@50 -- # local mount_point= 00:03:28.726 00:37:21 -- setup/devices.sh@51 -- # local test_file= 00:03:28.726 00:37:21 -- setup/devices.sh@53 -- # local found=0 00:03:28.726 00:37:21 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:28.726 00:37:21 -- setup/devices.sh@59 -- # local pci status 00:03:28.726 00:37:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.726 00:37:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:28.726 00:37:21 -- setup/devices.sh@47 -- # setup output config 00:03:28.726 00:37:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.726 00:37:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:32.014 00:37:24 -- 
setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.014 00:37:24 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:32.014 00:37:24 -- setup/devices.sh@63 -- # found=1 00:03:32.014 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.015 00:37:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:32.015 00:37:24 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:32.015 00:37:24 -- setup/devices.sh@68 -- # return 0 00:03:32.015 00:37:24 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:32.015 00:37:24 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.015 00:37:24 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:03:32.015 00:37:24 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:32.015 00:37:24 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:32.015 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:32.015 00:03:32.015 real 0m10.849s 00:03:32.015 user 0m3.133s 00:03:32.015 sys 0m5.521s 00:03:32.015 00:37:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:32.015 00:37:24 -- common/autotest_common.sh@10 -- # set +x 00:03:32.015 ************************************ 00:03:32.015 END TEST nvme_mount 00:03:32.015 ************************************ 00:03:32.015 00:37:24 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:32.015 00:37:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.015 00:37:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.015 00:37:24 -- common/autotest_common.sh@10 -- # set +x 00:03:32.015 ************************************ 00:03:32.015 START TEST dm_mount 00:03:32.015 ************************************ 00:03:32.015 00:37:24 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:32.015 00:37:24 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:32.015 00:37:24 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:32.015 00:37:24 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:32.015 00:37:24 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:32.015 00:37:24 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:32.015 00:37:24 -- setup/common.sh@40 -- # local part_no=2 00:03:32.015 00:37:24 -- setup/common.sh@41 -- # local size=1073741824 00:03:32.015 00:37:24 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:32.015 00:37:24 -- setup/common.sh@44 -- # parts=() 00:03:32.015 00:37:24 -- setup/common.sh@44 -- # local parts 00:03:32.015 00:37:24 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:32.015 00:37:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.015 00:37:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:32.015 00:37:24 -- setup/common.sh@46 -- # (( part++ )) 00:03:32.015 00:37:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.015 00:37:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:32.015 00:37:24 -- setup/common.sh@46 -- # (( part++ )) 00:03:32.015 00:37:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.015 00:37:24 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:32.015 00:37:24 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:32.015 00:37:24 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:32.953 Creating new GPT entries in memory. 00:03:32.953 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:32.953 other utilities. 00:03:32.953 00:37:25 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:32.953 00:37:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.953 00:37:25 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:32.953 00:37:25 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:32.953 00:37:25 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:33.891 Creating new GPT entries in memory. 00:03:33.891 The operation has completed successfully. 
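The dm_mount test reaching this point has just zapped the scratch drive and is carving two roughly 1 GiB GPT partitions with sgdisk before layering a device-mapper target on top of them. A minimal sketch of that partitioning step, assuming a scratch disk at /dev/nvme0n1 (the device name, sector ranges, and the partprobe call are illustrative; the test itself synchronizes on block-device uevents instead):

# Sketch only: destroy old metadata, then carve two 1 GiB partitions.
disk=/dev/nvme0n1                                      # assumed scratch device
sgdisk "$disk" --zap-all
flock "$disk" sgdisk "$disk" --new=1:2048:2099199      # partition 1: 2097152 sectors = 1 GiB
flock "$disk" sgdisk "$disk" --new=2:2099200:4196351   # partition 2: the next 1 GiB
partprobe "$disk"                                      # ask the kernel to re-read the table

Taking flock on the whole disk, as the trace above does, keeps concurrent sgdisk invocations from racing on the partition table.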
00:03:33.891 00:37:26 -- setup/common.sh@57 -- # (( part++ )) 00:03:33.891 00:37:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.891 00:37:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:33.891 00:37:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:33.891 00:37:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:34.827 The operation has completed successfully. 00:03:34.827 00:37:27 -- setup/common.sh@57 -- # (( part++ )) 00:03:34.827 00:37:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.827 00:37:27 -- setup/common.sh@62 -- # wait 1493602 00:03:34.827 00:37:27 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:34.827 00:37:27 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.828 00:37:27 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:34.828 00:37:27 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:34.828 00:37:27 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:34.828 00:37:27 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.828 00:37:27 -- setup/devices.sh@161 -- # break 00:03:34.828 00:37:27 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.828 00:37:27 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:34.828 00:37:27 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:34.828 00:37:27 -- setup/devices.sh@166 -- # dm=dm-2 00:03:34.828 00:37:27 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:34.828 00:37:27 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:34.828 00:37:27 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.828 00:37:27 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:34.828 00:37:27 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.828 00:37:27 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.828 00:37:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:35.087 00:37:27 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:35.087 00:37:27 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:35.087 00:37:27 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:35.087 00:37:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:35.087 00:37:27 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:35.087 00:37:27 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:35.087 00:37:27 -- setup/devices.sh@53 -- # local found=0 00:03:35.087 00:37:27 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:35.087 00:37:27 -- setup/devices.sh@56 -- # : 00:03:35.087 00:37:27 -- 
setup/devices.sh@59 -- # local pci status 00:03:35.087 00:37:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.087 00:37:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:35.087 00:37:27 -- setup/devices.sh@47 -- # setup output config 00:03:35.087 00:37:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.087 00:37:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:37.623 00:37:30 -- setup/devices.sh@63 -- # found=1 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.623 00:37:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.623 00:37:30 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:37.623 00:37:30 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.623 00:37:30 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:37.623 00:37:30 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:37.623 00:37:30 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.623 00:37:30 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:03:37.623 00:37:30 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:37.623 00:37:30 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:03:37.623 00:37:30 -- setup/devices.sh@50 -- # local mount_point= 00:03:37.623 00:37:30 -- setup/devices.sh@51 -- # local test_file= 00:03:37.623 00:37:30 -- setup/devices.sh@53 -- # local found=0 00:03:37.623 00:37:30 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:37.623 00:37:30 -- setup/devices.sh@59 -- # local pci status 00:03:37.623 00:37:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.624 00:37:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:37.624 00:37:30 -- setup/devices.sh@47 -- # setup output config 00:03:37.624 00:37:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.624 00:37:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:40.152 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.152 00:37:32 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:03:40.153 00:37:32 -- setup/devices.sh@63 -- # found=1 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 
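The verify step running here checks that both NVMe partitions are held by the device-mapper node and that the test mount is still in place before cleanup starts. A rough equivalent of those checks, with the partition names, dm node, and mount directory treated as illustrative assumptions:

# Sketch only: confirm the dm holder relationship and the mounted test file.
part1=nvme0n1p1; part2=nvme0n1p2; dm=dm-2     # assumed names
mnt=/var/tmp/dm_mount                         # assumed mount point
[[ -e /sys/class/block/$part1/holders/$dm ]] || echo "$part1 is not held by $dm"
[[ -e /sys/class/block/$part2/holders/$dm ]] || echo "$part2 is not held by $dm"
mountpoint -q "$mnt" && [[ -e "$mnt/test_dm" ]] && echo "mount and test file look good"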
00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.153 00:37:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.153 00:37:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.411 00:37:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:40.411 00:37:32 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:40.411 00:37:32 -- setup/devices.sh@68 -- # return 0 00:03:40.411 00:37:32 -- setup/devices.sh@187 -- # cleanup_dm 00:03:40.411 00:37:32 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:40.411 00:37:32 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:40.411 00:37:32 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:40.411 00:37:32 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:40.412 00:37:32 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:40.412 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:40.412 00:37:32 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:40.412 00:37:32 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:40.412 00:03:40.412 real 0m8.605s 00:03:40.412 user 0m1.972s 00:03:40.412 sys 0m3.609s 00:03:40.412 00:37:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:40.412 00:37:32 -- common/autotest_common.sh@10 -- # set +x 00:03:40.412 ************************************ 00:03:40.412 END TEST dm_mount 00:03:40.412 ************************************ 00:03:40.412 00:37:33 -- setup/devices.sh@1 -- # cleanup 00:03:40.412 00:37:33 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:40.412 00:37:33 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.412 00:37:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:40.412 00:37:33 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:40.412 00:37:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:40.412 00:37:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:40.678 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:40.678 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:40.678 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:40.678 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:40.678 00:37:33 -- setup/devices.sh@12 -- # cleanup_dm 00:03:40.678 00:37:33 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:40.678 00:37:33 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:40.678 00:37:33 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:40.679 00:37:33 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:40.679 00:37:33 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:40.679 00:37:33 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:40.679 00:03:40.679 real 0m23.287s 00:03:40.679 user 0m6.451s 00:03:40.679 sys 0m11.455s 00:03:40.679 00:37:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:40.679 00:37:33 -- common/autotest_common.sh@10 -- # set +x 00:03:40.679 ************************************ 00:03:40.679 END TEST devices 00:03:40.679 ************************************ 00:03:40.679 00:03:40.679 real 1m16.932s 00:03:40.679 user 0m24.877s 00:03:40.679 sys 0m41.982s 00:03:40.679 00:37:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:40.679 00:37:33 -- common/autotest_common.sh@10 -- # set +x 00:03:40.679 ************************************ 00:03:40.679 END TEST setup.sh 00:03:40.679 ************************************ 00:03:40.679 00:37:33 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:43.208 Hugepages 00:03:43.208 node hugesize free / total 00:03:43.208 node0 1048576kB 0 / 0 00:03:43.208 node0 2048kB 2048 / 2048 00:03:43.208 node1 1048576kB 0 / 0 00:03:43.208 node1 2048kB 0 / 0 00:03:43.208 00:03:43.208 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:43.208 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:43.467 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:43.467 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:43.467 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:43.467 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:43.467 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:43.467 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:43.467 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:43.467 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:43.467 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:43.467 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:43.467 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:43.467 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:43.467 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:43.467 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:43.467 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:43.467 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:43.467 00:37:36 -- spdk/autotest.sh@130 -- # uname -s 00:03:43.467 00:37:36 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:43.467 00:37:36 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:43.467 00:37:36 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:45.994 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:00:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:03:45.994 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:45.994 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:46.928 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:46.928 00:37:39 -- common/autotest_common.sh@1518 -- # sleep 1 00:03:47.863 00:37:40 -- common/autotest_common.sh@1519 -- # bdfs=() 00:03:47.863 00:37:40 -- common/autotest_common.sh@1519 -- # local bdfs 00:03:47.863 00:37:40 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:47.863 00:37:40 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:47.863 00:37:40 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:47.863 00:37:40 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:47.863 00:37:40 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:47.863 00:37:40 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:47.863 00:37:40 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:47.863 00:37:40 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:47.863 00:37:40 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:5e:00.0 00:03:47.863 00:37:40 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.402 Waiting for block devices as requested 00:03:50.403 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:50.666 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:50.666 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:50.666 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:50.924 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:50.924 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:50.924 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:50.924 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:51.182 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:51.182 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:51.182 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:51.440 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:51.440 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:51.440 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:51.440 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:51.698 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:51.698 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:51.698 00:37:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:51.698 00:37:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:51.698 00:37:44 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:03:51.698 00:37:44 -- common/autotest_common.sh@1488 -- # grep 0000:5e:00.0/nvme/nvme 00:03:51.698 00:37:44 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:51.698 00:37:44 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:51.698 00:37:44 -- 
common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:51.698 00:37:44 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:03:51.698 00:37:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:51.698 00:37:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:51.698 00:37:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:51.698 00:37:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:51.698 00:37:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:51.698 00:37:44 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:51.698 00:37:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:51.698 00:37:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:51.698 00:37:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:51.698 00:37:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:51.698 00:37:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:51.956 00:37:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:51.956 00:37:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:51.956 00:37:44 -- common/autotest_common.sh@1543 -- # continue 00:03:51.956 00:37:44 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:51.956 00:37:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:51.956 00:37:44 -- common/autotest_common.sh@10 -- # set +x 00:03:51.956 00:37:44 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:51.956 00:37:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:51.956 00:37:44 -- common/autotest_common.sh@10 -- # set +x 00:03:51.956 00:37:44 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.486 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:54.486 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:55.431 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:55.431 00:37:47 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:55.431 00:37:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:55.431 00:37:47 -- common/autotest_common.sh@10 -- # set +x 00:03:55.431 00:37:47 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:55.431 00:37:47 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:03:55.431 00:37:47 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:03:55.431 00:37:47 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:55.431 00:37:47 -- common/autotest_common.sh@1563 -- # local bdfs 00:03:55.431 00:37:47 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:03:55.431 00:37:47 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:55.431 
00:37:47 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:55.431 00:37:47 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:55.431 00:37:48 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:55.431 00:37:48 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:55.431 00:37:48 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:03:55.431 00:37:48 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:5e:00.0 00:03:55.431 00:37:48 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:55.431 00:37:48 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:55.431 00:37:48 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:55.431 00:37:48 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:55.431 00:37:48 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:55.431 00:37:48 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:5e:00.0 00:03:55.431 00:37:48 -- common/autotest_common.sh@1578 -- # [[ -z 0000:5e:00.0 ]] 00:03:55.431 00:37:48 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=1502368 00:03:55.431 00:37:48 -- common/autotest_common.sh@1584 -- # waitforlisten 1502368 00:03:55.431 00:37:48 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:55.431 00:37:48 -- common/autotest_common.sh@817 -- # '[' -z 1502368 ']' 00:03:55.431 00:37:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.431 00:37:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:55.431 00:37:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.431 00:37:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:55.431 00:37:48 -- common/autotest_common.sh@10 -- # set +x 00:03:55.688 [2024-04-27 00:37:48.130803] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
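The opal-revert cleanup above starts an SPDK target and then blocks until the target's RPC socket answers before sending any bdev_nvme RPCs. A hedged sketch of that start-and-wait pattern (binary path, core mask, and polling interval are assumptions):

# Sketch only: launch spdk_tgt and poll its default RPC socket.
./build/bin/spdk_tgt -m 0x1 &
tgt_pid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "spdk_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"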
00:03:55.688 [2024-04-27 00:37:48.130851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502368 ] 00:03:55.688 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.688 [2024-04-27 00:37:48.184876] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.688 [2024-04-27 00:37:48.262078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.254 00:37:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:56.254 00:37:48 -- common/autotest_common.sh@850 -- # return 0 00:03:56.254 00:37:48 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:03:56.254 00:37:48 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:03:56.254 00:37:48 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:59.546 nvme0n1 00:03:59.546 00:37:51 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:59.546 [2024-04-27 00:37:52.074799] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:59.546 request: 00:03:59.546 { 00:03:59.546 "nvme_ctrlr_name": "nvme0", 00:03:59.546 "password": "test", 00:03:59.546 "method": "bdev_nvme_opal_revert", 00:03:59.546 "req_id": 1 00:03:59.546 } 00:03:59.546 Got JSON-RPC error response 00:03:59.546 response: 00:03:59.546 { 00:03:59.546 "code": -32602, 00:03:59.546 "message": "Invalid parameters" 00:03:59.546 } 00:03:59.546 00:37:52 -- common/autotest_common.sh@1590 -- # true 00:03:59.546 00:37:52 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:03:59.546 00:37:52 -- common/autotest_common.sh@1594 -- # killprocess 1502368 00:03:59.546 00:37:52 -- common/autotest_common.sh@936 -- # '[' -z 1502368 ']' 00:03:59.546 00:37:52 -- common/autotest_common.sh@940 -- # kill -0 1502368 00:03:59.546 00:37:52 -- common/autotest_common.sh@941 -- # uname 00:03:59.546 00:37:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:59.546 00:37:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1502368 00:03:59.546 00:37:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:59.546 00:37:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:59.546 00:37:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1502368' 00:03:59.546 killing process with pid 1502368 00:03:59.546 00:37:52 -- common/autotest_common.sh@955 -- # kill 1502368 00:03:59.546 00:37:52 -- common/autotest_common.sh@960 -- # wait 1502368 00:04:01.492 00:37:53 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:01.492 00:37:53 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:01.492 00:37:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:01.492 00:37:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:01.492 00:37:53 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:01.492 00:37:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:01.492 00:37:53 -- common/autotest_common.sh@10 -- # set +x 00:04:01.492 00:37:53 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:01.492 00:37:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.492 00:37:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 
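Before the env tests begin, the trace above attaches the NVMe controller over PCIe, attempts an Opal revert that the drive rejects ("nvme0 not support opal"), and then stops the target. A condensed sketch of that RPC sequence, reusing the controller name and address from the log (the error handling and process teardown are illustrative):

# Sketch only: attach, try the Opal revert, tolerate unsupported drives, stop the target.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
if ! ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test; then
    echo "nvme0 does not support Opal, continuing without revert"
fi
kill "$tgt_pid" && wait "$tgt_pid"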
00:04:01.492 00:37:53 -- common/autotest_common.sh@10 -- # set +x 00:04:01.492 ************************************ 00:04:01.492 START TEST env 00:04:01.492 ************************************ 00:04:01.492 00:37:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:01.492 * Looking for test storage... 00:04:01.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:01.492 00:37:53 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:01.492 00:37:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.492 00:37:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.492 00:37:53 -- common/autotest_common.sh@10 -- # set +x 00:04:01.492 ************************************ 00:04:01.492 START TEST env_memory 00:04:01.492 ************************************ 00:04:01.492 00:37:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:01.492 00:04:01.492 00:04:01.492 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.492 http://cunit.sourceforge.net/ 00:04:01.492 00:04:01.492 00:04:01.492 Suite: memory 00:04:01.492 Test: alloc and free memory map ...[2024-04-27 00:37:54.116006] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:01.492 passed 00:04:01.492 Test: mem map translation ...[2024-04-27 00:37:54.133944] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:01.492 [2024-04-27 00:37:54.133958] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:01.492 [2024-04-27 00:37:54.133994] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:01.492 [2024-04-27 00:37:54.133999] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:01.492 passed 00:04:01.492 Test: mem map registration ...[2024-04-27 00:37:54.170916] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:01.492 [2024-04-27 00:37:54.170940] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:01.492 passed 00:04:01.751 Test: mem map adjacent registrations ...passed 00:04:01.751 00:04:01.751 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.751 suites 1 1 n/a 0 0 00:04:01.751 tests 4 4 4 0 0 00:04:01.751 asserts 152 152 152 0 n/a 00:04:01.751 00:04:01.751 Elapsed time = 0.137 seconds 00:04:01.751 00:04:01.751 real 0m0.148s 00:04:01.751 user 0m0.140s 00:04:01.751 sys 0m0.008s 00:04:01.751 00:37:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:01.751 00:37:54 -- common/autotest_common.sh@10 -- # set +x 00:04:01.751 ************************************ 00:04:01.751 END TEST env_memory 00:04:01.751 ************************************ 
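Each env sub-test above and below is launched through the same run_test helper, which banners the test name and propagates the child's exit status. The following is only an approximation of that wrapper, not the exact SPDK implementation:

# Sketch only: time a test binary and report its result under a banner.
run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"
    local rc=$?
    echo "************ END TEST $name (rc=$rc) ************"
    return $rc
}
run_test env_memory ./test/env/memory/memory_ut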
00:04:01.751 00:37:54 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:01.751 00:37:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.751 00:37:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.751 00:37:54 -- common/autotest_common.sh@10 -- # set +x 00:04:01.751 ************************************ 00:04:01.751 START TEST env_vtophys 00:04:01.751 ************************************ 00:04:01.751 00:37:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:01.751 EAL: lib.eal log level changed from notice to debug 00:04:01.751 EAL: Detected lcore 0 as core 0 on socket 0 00:04:01.751 EAL: Detected lcore 1 as core 1 on socket 0 00:04:01.751 EAL: Detected lcore 2 as core 2 on socket 0 00:04:01.751 EAL: Detected lcore 3 as core 3 on socket 0 00:04:01.751 EAL: Detected lcore 4 as core 4 on socket 0 00:04:01.751 EAL: Detected lcore 5 as core 5 on socket 0 00:04:01.751 EAL: Detected lcore 6 as core 6 on socket 0 00:04:01.751 EAL: Detected lcore 7 as core 8 on socket 0 00:04:01.751 EAL: Detected lcore 8 as core 9 on socket 0 00:04:01.751 EAL: Detected lcore 9 as core 10 on socket 0 00:04:01.751 EAL: Detected lcore 10 as core 11 on socket 0 00:04:01.751 EAL: Detected lcore 11 as core 12 on socket 0 00:04:01.751 EAL: Detected lcore 12 as core 13 on socket 0 00:04:01.751 EAL: Detected lcore 13 as core 16 on socket 0 00:04:01.751 EAL: Detected lcore 14 as core 17 on socket 0 00:04:01.751 EAL: Detected lcore 15 as core 18 on socket 0 00:04:01.751 EAL: Detected lcore 16 as core 19 on socket 0 00:04:01.751 EAL: Detected lcore 17 as core 20 on socket 0 00:04:01.751 EAL: Detected lcore 18 as core 21 on socket 0 00:04:01.751 EAL: Detected lcore 19 as core 25 on socket 0 00:04:01.751 EAL: Detected lcore 20 as core 26 on socket 0 00:04:01.751 EAL: Detected lcore 21 as core 27 on socket 0 00:04:01.751 EAL: Detected lcore 22 as core 28 on socket 0 00:04:01.751 EAL: Detected lcore 23 as core 29 on socket 0 00:04:01.751 EAL: Detected lcore 24 as core 0 on socket 1 00:04:01.751 EAL: Detected lcore 25 as core 1 on socket 1 00:04:01.751 EAL: Detected lcore 26 as core 2 on socket 1 00:04:01.751 EAL: Detected lcore 27 as core 3 on socket 1 00:04:01.751 EAL: Detected lcore 28 as core 4 on socket 1 00:04:01.751 EAL: Detected lcore 29 as core 5 on socket 1 00:04:01.751 EAL: Detected lcore 30 as core 6 on socket 1 00:04:01.751 EAL: Detected lcore 31 as core 9 on socket 1 00:04:01.751 EAL: Detected lcore 32 as core 10 on socket 1 00:04:01.751 EAL: Detected lcore 33 as core 11 on socket 1 00:04:01.751 EAL: Detected lcore 34 as core 12 on socket 1 00:04:01.751 EAL: Detected lcore 35 as core 13 on socket 1 00:04:01.751 EAL: Detected lcore 36 as core 16 on socket 1 00:04:01.751 EAL: Detected lcore 37 as core 17 on socket 1 00:04:01.751 EAL: Detected lcore 38 as core 18 on socket 1 00:04:01.751 EAL: Detected lcore 39 as core 19 on socket 1 00:04:01.751 EAL: Detected lcore 40 as core 20 on socket 1 00:04:01.751 EAL: Detected lcore 41 as core 21 on socket 1 00:04:01.751 EAL: Detected lcore 42 as core 24 on socket 1 00:04:01.751 EAL: Detected lcore 43 as core 25 on socket 1 00:04:01.751 EAL: Detected lcore 44 as core 26 on socket 1 00:04:01.751 EAL: Detected lcore 45 as core 27 on socket 1 00:04:01.751 EAL: Detected lcore 46 as core 28 on socket 1 00:04:01.751 EAL: Detected lcore 47 as core 29 on socket 1 00:04:01.751 EAL: Detected lcore 48 as core 0 on 
socket 0 00:04:01.751 EAL: Detected lcore 49 as core 1 on socket 0 00:04:01.751 EAL: Detected lcore 50 as core 2 on socket 0 00:04:01.751 EAL: Detected lcore 51 as core 3 on socket 0 00:04:01.751 EAL: Detected lcore 52 as core 4 on socket 0 00:04:01.751 EAL: Detected lcore 53 as core 5 on socket 0 00:04:01.751 EAL: Detected lcore 54 as core 6 on socket 0 00:04:01.751 EAL: Detected lcore 55 as core 8 on socket 0 00:04:01.751 EAL: Detected lcore 56 as core 9 on socket 0 00:04:01.751 EAL: Detected lcore 57 as core 10 on socket 0 00:04:01.751 EAL: Detected lcore 58 as core 11 on socket 0 00:04:01.751 EAL: Detected lcore 59 as core 12 on socket 0 00:04:01.751 EAL: Detected lcore 60 as core 13 on socket 0 00:04:01.751 EAL: Detected lcore 61 as core 16 on socket 0 00:04:01.751 EAL: Detected lcore 62 as core 17 on socket 0 00:04:01.751 EAL: Detected lcore 63 as core 18 on socket 0 00:04:01.751 EAL: Detected lcore 64 as core 19 on socket 0 00:04:01.751 EAL: Detected lcore 65 as core 20 on socket 0 00:04:01.751 EAL: Detected lcore 66 as core 21 on socket 0 00:04:01.751 EAL: Detected lcore 67 as core 25 on socket 0 00:04:01.751 EAL: Detected lcore 68 as core 26 on socket 0 00:04:01.751 EAL: Detected lcore 69 as core 27 on socket 0 00:04:01.751 EAL: Detected lcore 70 as core 28 on socket 0 00:04:01.751 EAL: Detected lcore 71 as core 29 on socket 0 00:04:01.751 EAL: Detected lcore 72 as core 0 on socket 1 00:04:01.751 EAL: Detected lcore 73 as core 1 on socket 1 00:04:01.751 EAL: Detected lcore 74 as core 2 on socket 1 00:04:01.751 EAL: Detected lcore 75 as core 3 on socket 1 00:04:01.751 EAL: Detected lcore 76 as core 4 on socket 1 00:04:01.751 EAL: Detected lcore 77 as core 5 on socket 1 00:04:01.752 EAL: Detected lcore 78 as core 6 on socket 1 00:04:01.752 EAL: Detected lcore 79 as core 9 on socket 1 00:04:01.752 EAL: Detected lcore 80 as core 10 on socket 1 00:04:01.752 EAL: Detected lcore 81 as core 11 on socket 1 00:04:01.752 EAL: Detected lcore 82 as core 12 on socket 1 00:04:01.752 EAL: Detected lcore 83 as core 13 on socket 1 00:04:01.752 EAL: Detected lcore 84 as core 16 on socket 1 00:04:01.752 EAL: Detected lcore 85 as core 17 on socket 1 00:04:01.752 EAL: Detected lcore 86 as core 18 on socket 1 00:04:01.752 EAL: Detected lcore 87 as core 19 on socket 1 00:04:01.752 EAL: Detected lcore 88 as core 20 on socket 1 00:04:01.752 EAL: Detected lcore 89 as core 21 on socket 1 00:04:01.752 EAL: Detected lcore 90 as core 24 on socket 1 00:04:01.752 EAL: Detected lcore 91 as core 25 on socket 1 00:04:01.752 EAL: Detected lcore 92 as core 26 on socket 1 00:04:01.752 EAL: Detected lcore 93 as core 27 on socket 1 00:04:01.752 EAL: Detected lcore 94 as core 28 on socket 1 00:04:01.752 EAL: Detected lcore 95 as core 29 on socket 1 00:04:01.752 EAL: Maximum logical cores by configuration: 128 00:04:01.752 EAL: Detected CPU lcores: 96 00:04:01.752 EAL: Detected NUMA nodes: 2 00:04:01.752 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:01.752 EAL: Detected shared linkage of DPDK 00:04:01.752 EAL: No shared files mode enabled, IPC will be disabled 00:04:01.752 EAL: Bus pci wants IOVA as 'DC' 00:04:01.752 EAL: Buses did not request a specific IOVA mode. 00:04:01.752 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:01.752 EAL: Selected IOVA mode 'VA' 00:04:01.752 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.752 EAL: Probing VFIO support... 
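The EAL probe output above reflects host state that can be checked directly: an enabled IOMMU, the vfio-pci module, and a reserved hugepage pool. A few illustrative host-side checks that correspond to those messages:

# Sketch only: quick sanity checks mirroring the EAL detection above.
ls /sys/kernel/iommu_groups | wc -l     # non-zero means the IOMMU is active (VFIO usable)
lsmod | grep -c '^vfio_pci'             # vfio-pci loaded for device passthrough
grep -i huge /proc/meminfo              # hugepage pool the EAL maps its memsegs from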
00:04:01.752 EAL: IOMMU type 1 (Type 1) is supported 00:04:01.752 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:01.752 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:01.752 EAL: VFIO support initialized 00:04:01.752 EAL: Ask a virtual area of 0x2e000 bytes 00:04:01.752 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:01.752 EAL: Setting up physically contiguous memory... 00:04:01.752 EAL: Setting maximum number of open files to 524288 00:04:01.752 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:01.752 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:01.752 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:01.752 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.752 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:01.752 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.752 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.752 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:01.752 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:01.752 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.752 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:01.752 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.752 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.752 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:01.752 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:01.752 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.752 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:01.752 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.752 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.752 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:01.752 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:01.752 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.752 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:01.752 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.752 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.752 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:01.752 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:01.752 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:01.752 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.752 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:01.752 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:01.752 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.752 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:01.752 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:01.752 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.752 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:01.752 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:01.752 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.752 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:01.752 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:01.752 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.752 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:01.752 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:01.752 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.752 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:01.752 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:01.752 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.752 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:01.752 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:01.752 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.752 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:01.752 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:01.752 EAL: Hugepages will be freed exactly as allocated. 00:04:01.752 EAL: No shared files mode enabled, IPC is disabled 00:04:01.752 EAL: No shared files mode enabled, IPC is disabled 00:04:01.752 EAL: TSC frequency is ~2300000 KHz 00:04:01.752 EAL: Main lcore 0 is ready (tid=7f7f500e1a00;cpuset=[0]) 00:04:01.752 EAL: Trying to obtain current memory policy. 00:04:01.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.752 EAL: Restoring previous memory policy: 0 00:04:01.752 EAL: request: mp_malloc_sync 00:04:01.752 EAL: No shared files mode enabled, IPC is disabled 00:04:01.752 EAL: Heap on socket 0 was expanded by 2MB 00:04:01.752 EAL: No shared files mode enabled, IPC is disabled 00:04:01.752 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:01.752 EAL: Mem event callback 'spdk:(nil)' registered 00:04:01.752 00:04:01.752 00:04:01.752 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.752 http://cunit.sourceforge.net/ 00:04:01.752 00:04:01.752 00:04:01.752 Suite: components_suite 00:04:01.752 Test: vtophys_malloc_test ...passed 00:04:01.752 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:01.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.752 EAL: Restoring previous memory policy: 4 00:04:01.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.752 EAL: request: mp_malloc_sync 00:04:01.752 EAL: No shared files mode enabled, IPC is disabled 00:04:01.752 EAL: Heap on socket 0 was expanded by 4MB 00:04:01.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.752 EAL: request: mp_malloc_sync 00:04:01.752 EAL: No shared files mode enabled, IPC is disabled 00:04:01.752 EAL: Heap on socket 0 was shrunk by 4MB 00:04:01.752 EAL: Trying to obtain current memory policy. 00:04:01.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.752 EAL: Restoring previous memory policy: 4 00:04:01.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.752 EAL: request: mp_malloc_sync 00:04:01.752 EAL: No shared files mode enabled, IPC is disabled 00:04:01.752 EAL: Heap on socket 0 was expanded by 6MB 00:04:01.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.752 EAL: request: mp_malloc_sync 00:04:01.752 EAL: No shared files mode enabled, IPC is disabled 00:04:01.752 EAL: Heap on socket 0 was shrunk by 6MB 00:04:01.752 EAL: Trying to obtain current memory policy. 00:04:01.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.752 EAL: Restoring previous memory policy: 4 00:04:01.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.752 EAL: request: mp_malloc_sync 00:04:01.752 EAL: No shared files mode enabled, IPC is disabled 00:04:01.752 EAL: Heap on socket 0 was expanded by 10MB 00:04:01.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.752 EAL: request: mp_malloc_sync 00:04:01.752 EAL: No shared files mode enabled, IPC is disabled 00:04:01.752 EAL: Heap on socket 0 was shrunk by 10MB 00:04:01.752 EAL: Trying to obtain current memory policy. 
00:04:01.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.752 EAL: Restoring previous memory policy: 4 00:04:01.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.752 EAL: request: mp_malloc_sync 00:04:01.752 EAL: No shared files mode enabled, IPC is disabled 00:04:01.752 EAL: Heap on socket 0 was expanded by 18MB 00:04:01.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.752 EAL: request: mp_malloc_sync 00:04:01.752 EAL: No shared files mode enabled, IPC is disabled 00:04:01.752 EAL: Heap on socket 0 was shrunk by 18MB 00:04:01.752 EAL: Trying to obtain current memory policy. 00:04:01.752 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.752 EAL: Restoring previous memory policy: 4 00:04:01.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.752 EAL: request: mp_malloc_sync 00:04:01.752 EAL: No shared files mode enabled, IPC is disabled 00:04:01.752 EAL: Heap on socket 0 was expanded by 34MB 00:04:01.752 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.011 EAL: request: mp_malloc_sync 00:04:02.011 EAL: No shared files mode enabled, IPC is disabled 00:04:02.011 EAL: Heap on socket 0 was shrunk by 34MB 00:04:02.011 EAL: Trying to obtain current memory policy. 00:04:02.011 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.011 EAL: Restoring previous memory policy: 4 00:04:02.011 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.011 EAL: request: mp_malloc_sync 00:04:02.011 EAL: No shared files mode enabled, IPC is disabled 00:04:02.011 EAL: Heap on socket 0 was expanded by 66MB 00:04:02.011 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.011 EAL: request: mp_malloc_sync 00:04:02.011 EAL: No shared files mode enabled, IPC is disabled 00:04:02.011 EAL: Heap on socket 0 was shrunk by 66MB 00:04:02.011 EAL: Trying to obtain current memory policy. 00:04:02.011 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.011 EAL: Restoring previous memory policy: 4 00:04:02.011 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.011 EAL: request: mp_malloc_sync 00:04:02.011 EAL: No shared files mode enabled, IPC is disabled 00:04:02.011 EAL: Heap on socket 0 was expanded by 130MB 00:04:02.011 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.011 EAL: request: mp_malloc_sync 00:04:02.011 EAL: No shared files mode enabled, IPC is disabled 00:04:02.011 EAL: Heap on socket 0 was shrunk by 130MB 00:04:02.011 EAL: Trying to obtain current memory policy. 00:04:02.011 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.011 EAL: Restoring previous memory policy: 4 00:04:02.011 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.011 EAL: request: mp_malloc_sync 00:04:02.011 EAL: No shared files mode enabled, IPC is disabled 00:04:02.011 EAL: Heap on socket 0 was expanded by 258MB 00:04:02.011 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.011 EAL: request: mp_malloc_sync 00:04:02.011 EAL: No shared files mode enabled, IPC is disabled 00:04:02.011 EAL: Heap on socket 0 was shrunk by 258MB 00:04:02.011 EAL: Trying to obtain current memory policy. 
00:04:02.011 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.268 EAL: Restoring previous memory policy: 4 00:04:02.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.268 EAL: request: mp_malloc_sync 00:04:02.268 EAL: No shared files mode enabled, IPC is disabled 00:04:02.268 EAL: Heap on socket 0 was expanded by 514MB 00:04:02.268 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.268 EAL: request: mp_malloc_sync 00:04:02.268 EAL: No shared files mode enabled, IPC is disabled 00:04:02.268 EAL: Heap on socket 0 was shrunk by 514MB 00:04:02.268 EAL: Trying to obtain current memory policy. 00:04:02.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.526 EAL: Restoring previous memory policy: 4 00:04:02.526 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.526 EAL: request: mp_malloc_sync 00:04:02.526 EAL: No shared files mode enabled, IPC is disabled 00:04:02.526 EAL: Heap on socket 0 was expanded by 1026MB 00:04:02.784 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.784 EAL: request: mp_malloc_sync 00:04:02.784 EAL: No shared files mode enabled, IPC is disabled 00:04:02.784 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:02.784 passed 00:04:02.784 00:04:02.784 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.784 suites 1 1 n/a 0 0 00:04:02.784 tests 2 2 2 0 0 00:04:02.784 asserts 497 497 497 0 n/a 00:04:02.784 00:04:02.784 Elapsed time = 0.960 seconds 00:04:02.784 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.784 EAL: request: mp_malloc_sync 00:04:02.784 EAL: No shared files mode enabled, IPC is disabled 00:04:02.784 EAL: Heap on socket 0 was shrunk by 2MB 00:04:02.784 EAL: No shared files mode enabled, IPC is disabled 00:04:02.784 EAL: No shared files mode enabled, IPC is disabled 00:04:02.784 EAL: No shared files mode enabled, IPC is disabled 00:04:02.784 00:04:02.784 real 0m1.077s 00:04:02.784 user 0m0.631s 00:04:02.784 sys 0m0.413s 00:04:02.784 00:37:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:02.784 00:37:55 -- common/autotest_common.sh@10 -- # set +x 00:04:02.784 ************************************ 00:04:02.784 END TEST env_vtophys 00:04:02.784 ************************************ 00:04:02.784 00:37:55 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:02.784 00:37:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.784 00:37:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.784 00:37:55 -- common/autotest_common.sh@10 -- # set +x 00:04:03.042 ************************************ 00:04:03.042 START TEST env_pci 00:04:03.042 ************************************ 00:04:03.042 00:37:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:03.042 00:04:03.042 00:04:03.042 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.042 http://cunit.sourceforge.net/ 00:04:03.042 00:04:03.042 00:04:03.042 Suite: pci 00:04:03.042 Test: pci_hook ...[2024-04-27 00:37:55.599972] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1503731 has claimed it 00:04:03.042 EAL: Cannot find device (10000:00:01.0) 00:04:03.042 EAL: Failed to attach device on primary process 00:04:03.042 passed 00:04:03.042 00:04:03.042 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.042 suites 1 1 n/a 0 0 00:04:03.042 tests 1 1 1 0 0 
00:04:03.042 asserts 25 25 25 0 n/a 00:04:03.042 00:04:03.042 Elapsed time = 0.028 seconds 00:04:03.042 00:04:03.042 real 0m0.048s 00:04:03.042 user 0m0.013s 00:04:03.042 sys 0m0.035s 00:04:03.042 00:37:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:03.042 00:37:55 -- common/autotest_common.sh@10 -- # set +x 00:04:03.042 ************************************ 00:04:03.042 END TEST env_pci 00:04:03.042 ************************************ 00:04:03.042 00:37:55 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:03.042 00:37:55 -- env/env.sh@15 -- # uname 00:04:03.042 00:37:55 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:03.042 00:37:55 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:03.042 00:37:55 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:03.042 00:37:55 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:03.042 00:37:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.042 00:37:55 -- common/autotest_common.sh@10 -- # set +x 00:04:03.301 ************************************ 00:04:03.301 START TEST env_dpdk_post_init 00:04:03.301 ************************************ 00:04:03.301 00:37:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:03.301 EAL: Detected CPU lcores: 96 00:04:03.301 EAL: Detected NUMA nodes: 2 00:04:03.301 EAL: Detected shared linkage of DPDK 00:04:03.301 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:03.301 EAL: Selected IOVA mode 'VA' 00:04:03.301 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.301 EAL: VFIO support initialized 00:04:03.301 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:03.301 EAL: Using IOMMU type 1 (Type 1) 00:04:03.301 EAL: Ignore mapping IO port bar(1) 00:04:03.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:03.301 EAL: Ignore mapping IO port bar(1) 00:04:03.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:03.301 EAL: Ignore mapping IO port bar(1) 00:04:03.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:03.301 EAL: Ignore mapping IO port bar(1) 00:04:03.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:03.301 EAL: Ignore mapping IO port bar(1) 00:04:03.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:03.301 EAL: Ignore mapping IO port bar(1) 00:04:03.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:03.301 EAL: Ignore mapping IO port bar(1) 00:04:03.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:03.301 EAL: Ignore mapping IO port bar(1) 00:04:03.301 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:04.236 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:04.236 EAL: Ignore mapping IO port bar(1) 00:04:04.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:04.236 EAL: Ignore mapping IO port bar(1) 00:04:04.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:04.236 EAL: Ignore mapping IO port bar(1) 00:04:04.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 
00:04:04.236 EAL: Ignore mapping IO port bar(1) 00:04:04.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:04.236 EAL: Ignore mapping IO port bar(1) 00:04:04.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:04.236 EAL: Ignore mapping IO port bar(1) 00:04:04.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:04.236 EAL: Ignore mapping IO port bar(1) 00:04:04.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:04.236 EAL: Ignore mapping IO port bar(1) 00:04:04.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:07.518 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:07.518 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:07.518 Starting DPDK initialization... 00:04:07.518 Starting SPDK post initialization... 00:04:07.518 SPDK NVMe probe 00:04:07.518 Attaching to 0000:5e:00.0 00:04:07.518 Attached to 0000:5e:00.0 00:04:07.518 Cleaning up... 00:04:07.518 00:04:07.518 real 0m4.333s 00:04:07.518 user 0m3.304s 00:04:07.518 sys 0m0.102s 00:04:07.518 00:38:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:07.518 00:38:00 -- common/autotest_common.sh@10 -- # set +x 00:04:07.518 ************************************ 00:04:07.518 END TEST env_dpdk_post_init 00:04:07.518 ************************************ 00:04:07.518 00:38:00 -- env/env.sh@26 -- # uname 00:04:07.518 00:38:00 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:07.518 00:38:00 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.518 00:38:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.518 00:38:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.518 00:38:00 -- common/autotest_common.sh@10 -- # set +x 00:04:07.775 ************************************ 00:04:07.775 START TEST env_mem_callbacks 00:04:07.775 ************************************ 00:04:07.775 00:38:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.775 EAL: Detected CPU lcores: 96 00:04:07.775 EAL: Detected NUMA nodes: 2 00:04:07.775 EAL: Detected shared linkage of DPDK 00:04:07.775 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:07.775 EAL: Selected IOVA mode 'VA' 00:04:07.775 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.775 EAL: VFIO support initialized 00:04:07.775 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:07.775 00:04:07.775 00:04:07.775 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.775 http://cunit.sourceforge.net/ 00:04:07.775 00:04:07.775 00:04:07.775 Suite: memory 00:04:07.775 Test: test ... 
00:04:07.775 register 0x200000200000 2097152 00:04:07.775 malloc 3145728 00:04:07.775 register 0x200000400000 4194304 00:04:07.775 buf 0x200000500000 len 3145728 PASSED 00:04:07.775 malloc 64 00:04:07.775 buf 0x2000004fff40 len 64 PASSED 00:04:07.775 malloc 4194304 00:04:07.775 register 0x200000800000 6291456 00:04:07.775 buf 0x200000a00000 len 4194304 PASSED 00:04:07.776 free 0x200000500000 3145728 00:04:07.776 free 0x2000004fff40 64 00:04:07.776 unregister 0x200000400000 4194304 PASSED 00:04:07.776 free 0x200000a00000 4194304 00:04:07.776 unregister 0x200000800000 6291456 PASSED 00:04:07.776 malloc 8388608 00:04:07.776 register 0x200000400000 10485760 00:04:07.776 buf 0x200000600000 len 8388608 PASSED 00:04:07.776 free 0x200000600000 8388608 00:04:07.776 unregister 0x200000400000 10485760 PASSED 00:04:07.776 passed 00:04:07.776 00:04:07.776 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.776 suites 1 1 n/a 0 0 00:04:07.776 tests 1 1 1 0 0 00:04:07.776 asserts 15 15 15 0 n/a 00:04:07.776 00:04:07.776 Elapsed time = 0.005 seconds 00:04:07.776 00:04:07.776 real 0m0.049s 00:04:07.776 user 0m0.014s 00:04:07.776 sys 0m0.035s 00:04:07.776 00:38:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:07.776 00:38:00 -- common/autotest_common.sh@10 -- # set +x 00:04:07.776 ************************************ 00:04:07.776 END TEST env_mem_callbacks 00:04:07.776 ************************************ 00:04:07.776 00:04:07.776 real 0m6.451s 00:04:07.776 user 0m4.410s 00:04:07.776 sys 0m1.040s 00:04:07.776 00:38:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:07.776 00:38:00 -- common/autotest_common.sh@10 -- # set +x 00:04:07.776 ************************************ 00:04:07.776 END TEST env 00:04:07.776 ************************************ 00:04:07.776 00:38:00 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:07.776 00:38:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.776 00:38:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.776 00:38:00 -- common/autotest_common.sh@10 -- # set +x 00:04:07.776 ************************************ 00:04:07.776 START TEST rpc 00:04:07.776 ************************************ 00:04:07.776 00:38:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:08.034 * Looking for test storage... 00:04:08.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:08.034 00:38:00 -- rpc/rpc.sh@65 -- # spdk_pid=1504786 00:04:08.034 00:38:00 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.034 00:38:00 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:08.034 00:38:00 -- rpc/rpc.sh@67 -- # waitforlisten 1504786 00:04:08.034 00:38:00 -- common/autotest_common.sh@817 -- # '[' -z 1504786 ']' 00:04:08.034 00:38:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.034 00:38:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:08.034 00:38:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:08.034 00:38:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:08.034 00:38:00 -- common/autotest_common.sh@10 -- # set +x 00:04:08.034 [2024-04-27 00:38:00.606822] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:04:08.034 [2024-04-27 00:38:00.606869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504786 ] 00:04:08.034 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.034 [2024-04-27 00:38:00.661431] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.292 [2024-04-27 00:38:00.737357] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:08.292 [2024-04-27 00:38:00.737395] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1504786' to capture a snapshot of events at runtime. 00:04:08.292 [2024-04-27 00:38:00.737402] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:08.292 [2024-04-27 00:38:00.737409] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:08.292 [2024-04-27 00:38:00.737414] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1504786 for offline analysis/debug. 00:04:08.292 [2024-04-27 00:38:00.737437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.857 00:38:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:08.857 00:38:01 -- common/autotest_common.sh@850 -- # return 0 00:04:08.857 00:38:01 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:08.857 00:38:01 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:08.857 00:38:01 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:08.857 00:38:01 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:08.857 00:38:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:08.857 00:38:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:08.857 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:08.857 ************************************ 00:04:08.857 START TEST rpc_integrity 00:04:08.857 ************************************ 00:04:08.857 00:38:01 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:08.857 00:38:01 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.857 00:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:08.857 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:08.857 00:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:08.857 00:38:01 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.857 00:38:01 -- rpc/rpc.sh@13 -- # jq length 00:04:09.115 00:38:01 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.115 00:38:01 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.115 00:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:04:09.115 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:09.115 00:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.115 00:38:01 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:09.115 00:38:01 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.115 00:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.115 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:09.115 00:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.115 00:38:01 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.115 { 00:04:09.115 "name": "Malloc0", 00:04:09.115 "aliases": [ 00:04:09.115 "147c0408-b85a-4303-a519-11be9f1f73f4" 00:04:09.115 ], 00:04:09.115 "product_name": "Malloc disk", 00:04:09.115 "block_size": 512, 00:04:09.115 "num_blocks": 16384, 00:04:09.115 "uuid": "147c0408-b85a-4303-a519-11be9f1f73f4", 00:04:09.115 "assigned_rate_limits": { 00:04:09.115 "rw_ios_per_sec": 0, 00:04:09.115 "rw_mbytes_per_sec": 0, 00:04:09.115 "r_mbytes_per_sec": 0, 00:04:09.115 "w_mbytes_per_sec": 0 00:04:09.115 }, 00:04:09.115 "claimed": false, 00:04:09.115 "zoned": false, 00:04:09.115 "supported_io_types": { 00:04:09.115 "read": true, 00:04:09.115 "write": true, 00:04:09.115 "unmap": true, 00:04:09.115 "write_zeroes": true, 00:04:09.115 "flush": true, 00:04:09.115 "reset": true, 00:04:09.115 "compare": false, 00:04:09.115 "compare_and_write": false, 00:04:09.115 "abort": true, 00:04:09.115 "nvme_admin": false, 00:04:09.115 "nvme_io": false 00:04:09.115 }, 00:04:09.115 "memory_domains": [ 00:04:09.115 { 00:04:09.115 "dma_device_id": "system", 00:04:09.115 "dma_device_type": 1 00:04:09.115 }, 00:04:09.115 { 00:04:09.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.115 "dma_device_type": 2 00:04:09.115 } 00:04:09.115 ], 00:04:09.115 "driver_specific": {} 00:04:09.115 } 00:04:09.115 ]' 00:04:09.115 00:38:01 -- rpc/rpc.sh@17 -- # jq length 00:04:09.115 00:38:01 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.115 00:38:01 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:09.115 00:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.115 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:09.115 [2024-04-27 00:38:01.665826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:09.115 [2024-04-27 00:38:01.665858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.115 [2024-04-27 00:38:01.665869] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe304e0 00:04:09.115 [2024-04-27 00:38:01.665875] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.115 [2024-04-27 00:38:01.666961] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.115 [2024-04-27 00:38:01.666984] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.115 Passthru0 00:04:09.115 00:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.115 00:38:01 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.115 00:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.115 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:09.115 00:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.115 00:38:01 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.115 { 00:04:09.115 "name": "Malloc0", 00:04:09.115 "aliases": [ 00:04:09.115 "147c0408-b85a-4303-a519-11be9f1f73f4" 00:04:09.115 ], 00:04:09.115 "product_name": "Malloc disk", 00:04:09.115 "block_size": 512, 
00:04:09.115 "num_blocks": 16384, 00:04:09.115 "uuid": "147c0408-b85a-4303-a519-11be9f1f73f4", 00:04:09.115 "assigned_rate_limits": { 00:04:09.115 "rw_ios_per_sec": 0, 00:04:09.115 "rw_mbytes_per_sec": 0, 00:04:09.115 "r_mbytes_per_sec": 0, 00:04:09.115 "w_mbytes_per_sec": 0 00:04:09.115 }, 00:04:09.115 "claimed": true, 00:04:09.115 "claim_type": "exclusive_write", 00:04:09.115 "zoned": false, 00:04:09.115 "supported_io_types": { 00:04:09.115 "read": true, 00:04:09.115 "write": true, 00:04:09.115 "unmap": true, 00:04:09.115 "write_zeroes": true, 00:04:09.115 "flush": true, 00:04:09.115 "reset": true, 00:04:09.115 "compare": false, 00:04:09.115 "compare_and_write": false, 00:04:09.115 "abort": true, 00:04:09.115 "nvme_admin": false, 00:04:09.115 "nvme_io": false 00:04:09.115 }, 00:04:09.115 "memory_domains": [ 00:04:09.115 { 00:04:09.115 "dma_device_id": "system", 00:04:09.115 "dma_device_type": 1 00:04:09.115 }, 00:04:09.115 { 00:04:09.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.115 "dma_device_type": 2 00:04:09.115 } 00:04:09.115 ], 00:04:09.115 "driver_specific": {} 00:04:09.115 }, 00:04:09.115 { 00:04:09.115 "name": "Passthru0", 00:04:09.115 "aliases": [ 00:04:09.115 "e0ae6195-7275-5056-9d36-76bb7bcd2150" 00:04:09.115 ], 00:04:09.115 "product_name": "passthru", 00:04:09.115 "block_size": 512, 00:04:09.115 "num_blocks": 16384, 00:04:09.115 "uuid": "e0ae6195-7275-5056-9d36-76bb7bcd2150", 00:04:09.115 "assigned_rate_limits": { 00:04:09.115 "rw_ios_per_sec": 0, 00:04:09.115 "rw_mbytes_per_sec": 0, 00:04:09.115 "r_mbytes_per_sec": 0, 00:04:09.115 "w_mbytes_per_sec": 0 00:04:09.115 }, 00:04:09.115 "claimed": false, 00:04:09.115 "zoned": false, 00:04:09.115 "supported_io_types": { 00:04:09.115 "read": true, 00:04:09.115 "write": true, 00:04:09.115 "unmap": true, 00:04:09.115 "write_zeroes": true, 00:04:09.115 "flush": true, 00:04:09.115 "reset": true, 00:04:09.115 "compare": false, 00:04:09.115 "compare_and_write": false, 00:04:09.115 "abort": true, 00:04:09.115 "nvme_admin": false, 00:04:09.115 "nvme_io": false 00:04:09.115 }, 00:04:09.115 "memory_domains": [ 00:04:09.115 { 00:04:09.115 "dma_device_id": "system", 00:04:09.115 "dma_device_type": 1 00:04:09.115 }, 00:04:09.115 { 00:04:09.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.115 "dma_device_type": 2 00:04:09.115 } 00:04:09.115 ], 00:04:09.115 "driver_specific": { 00:04:09.115 "passthru": { 00:04:09.115 "name": "Passthru0", 00:04:09.115 "base_bdev_name": "Malloc0" 00:04:09.115 } 00:04:09.115 } 00:04:09.115 } 00:04:09.115 ]' 00:04:09.115 00:38:01 -- rpc/rpc.sh@21 -- # jq length 00:04:09.115 00:38:01 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.115 00:38:01 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.115 00:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.115 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:09.115 00:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.115 00:38:01 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:09.115 00:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.115 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:09.115 00:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.115 00:38:01 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.115 00:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.115 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:09.115 00:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.115 00:38:01 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.115 00:38:01 -- rpc/rpc.sh@26 -- # jq length 00:04:09.115 00:38:01 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.115 00:04:09.115 real 0m0.269s 00:04:09.115 user 0m0.167s 00:04:09.115 sys 0m0.035s 00:04:09.115 00:38:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.115 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:09.115 ************************************ 00:04:09.115 END TEST rpc_integrity 00:04:09.115 ************************************ 00:04:09.373 00:38:01 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:09.373 00:38:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.373 00:38:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.373 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:09.373 ************************************ 00:04:09.373 START TEST rpc_plugins 00:04:09.373 ************************************ 00:04:09.373 00:38:01 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:09.373 00:38:01 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:09.373 00:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.373 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:09.373 00:38:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.373 00:38:01 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:09.373 00:38:01 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:09.373 00:38:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.373 00:38:01 -- common/autotest_common.sh@10 -- # set +x 00:04:09.373 00:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.373 00:38:02 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:09.373 { 00:04:09.373 "name": "Malloc1", 00:04:09.373 "aliases": [ 00:04:09.373 "1e7b98b0-ff3d-47f8-816b-2fbac8bc259f" 00:04:09.373 ], 00:04:09.373 "product_name": "Malloc disk", 00:04:09.373 "block_size": 4096, 00:04:09.373 "num_blocks": 256, 00:04:09.373 "uuid": "1e7b98b0-ff3d-47f8-816b-2fbac8bc259f", 00:04:09.373 "assigned_rate_limits": { 00:04:09.373 "rw_ios_per_sec": 0, 00:04:09.373 "rw_mbytes_per_sec": 0, 00:04:09.373 "r_mbytes_per_sec": 0, 00:04:09.373 "w_mbytes_per_sec": 0 00:04:09.373 }, 00:04:09.373 "claimed": false, 00:04:09.373 "zoned": false, 00:04:09.373 "supported_io_types": { 00:04:09.373 "read": true, 00:04:09.373 "write": true, 00:04:09.373 "unmap": true, 00:04:09.373 "write_zeroes": true, 00:04:09.373 "flush": true, 00:04:09.373 "reset": true, 00:04:09.373 "compare": false, 00:04:09.373 "compare_and_write": false, 00:04:09.373 "abort": true, 00:04:09.373 "nvme_admin": false, 00:04:09.373 "nvme_io": false 00:04:09.373 }, 00:04:09.373 "memory_domains": [ 00:04:09.373 { 00:04:09.373 "dma_device_id": "system", 00:04:09.373 "dma_device_type": 1 00:04:09.373 }, 00:04:09.373 { 00:04:09.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.373 "dma_device_type": 2 00:04:09.373 } 00:04:09.373 ], 00:04:09.373 "driver_specific": {} 00:04:09.373 } 00:04:09.373 ]' 00:04:09.373 00:38:02 -- rpc/rpc.sh@32 -- # jq length 00:04:09.373 00:38:02 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:09.373 00:38:02 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:09.373 00:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.373 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:09.373 00:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.373 00:38:02 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:09.373 00:38:02 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:04:09.373 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:09.631 00:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.631 00:38:02 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:09.631 00:38:02 -- rpc/rpc.sh@36 -- # jq length 00:04:09.631 00:38:02 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:09.631 00:04:09.631 real 0m0.139s 00:04:09.631 user 0m0.082s 00:04:09.631 sys 0m0.021s 00:04:09.631 00:38:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.631 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:09.631 ************************************ 00:04:09.631 END TEST rpc_plugins 00:04:09.631 ************************************ 00:04:09.631 00:38:02 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:09.631 00:38:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.631 00:38:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.631 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:09.631 ************************************ 00:04:09.631 START TEST rpc_trace_cmd_test 00:04:09.631 ************************************ 00:04:09.631 00:38:02 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:09.631 00:38:02 -- rpc/rpc.sh@40 -- # local info 00:04:09.631 00:38:02 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:09.631 00:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:09.631 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:09.631 00:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:09.631 00:38:02 -- rpc/rpc.sh@42 -- # info='{ 00:04:09.631 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1504786", 00:04:09.631 "tpoint_group_mask": "0x8", 00:04:09.631 "iscsi_conn": { 00:04:09.631 "mask": "0x2", 00:04:09.631 "tpoint_mask": "0x0" 00:04:09.631 }, 00:04:09.631 "scsi": { 00:04:09.631 "mask": "0x4", 00:04:09.631 "tpoint_mask": "0x0" 00:04:09.631 }, 00:04:09.631 "bdev": { 00:04:09.631 "mask": "0x8", 00:04:09.631 "tpoint_mask": "0xffffffffffffffff" 00:04:09.631 }, 00:04:09.631 "nvmf_rdma": { 00:04:09.631 "mask": "0x10", 00:04:09.631 "tpoint_mask": "0x0" 00:04:09.631 }, 00:04:09.631 "nvmf_tcp": { 00:04:09.631 "mask": "0x20", 00:04:09.631 "tpoint_mask": "0x0" 00:04:09.631 }, 00:04:09.631 "ftl": { 00:04:09.631 "mask": "0x40", 00:04:09.631 "tpoint_mask": "0x0" 00:04:09.631 }, 00:04:09.631 "blobfs": { 00:04:09.631 "mask": "0x80", 00:04:09.631 "tpoint_mask": "0x0" 00:04:09.631 }, 00:04:09.631 "dsa": { 00:04:09.631 "mask": "0x200", 00:04:09.631 "tpoint_mask": "0x0" 00:04:09.631 }, 00:04:09.631 "thread": { 00:04:09.631 "mask": "0x400", 00:04:09.631 "tpoint_mask": "0x0" 00:04:09.631 }, 00:04:09.631 "nvme_pcie": { 00:04:09.631 "mask": "0x800", 00:04:09.631 "tpoint_mask": "0x0" 00:04:09.631 }, 00:04:09.631 "iaa": { 00:04:09.631 "mask": "0x1000", 00:04:09.631 "tpoint_mask": "0x0" 00:04:09.631 }, 00:04:09.631 "nvme_tcp": { 00:04:09.631 "mask": "0x2000", 00:04:09.631 "tpoint_mask": "0x0" 00:04:09.631 }, 00:04:09.631 "bdev_nvme": { 00:04:09.631 "mask": "0x4000", 00:04:09.631 "tpoint_mask": "0x0" 00:04:09.631 }, 00:04:09.631 "sock": { 00:04:09.631 "mask": "0x8000", 00:04:09.631 "tpoint_mask": "0x0" 00:04:09.631 } 00:04:09.631 }' 00:04:09.631 00:38:02 -- rpc/rpc.sh@43 -- # jq length 00:04:09.889 00:38:02 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:09.889 00:38:02 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:09.889 00:38:02 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:09.889 00:38:02 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
00:04:09.889 00:38:02 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:09.889 00:38:02 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:09.889 00:38:02 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:09.889 00:38:02 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:09.889 00:38:02 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:09.889 00:04:09.889 real 0m0.232s 00:04:09.889 user 0m0.200s 00:04:09.889 sys 0m0.024s 00:04:09.889 00:38:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.889 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:09.889 ************************************ 00:04:09.889 END TEST rpc_trace_cmd_test 00:04:09.889 ************************************ 00:04:09.889 00:38:02 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:09.889 00:38:02 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:09.889 00:38:02 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:09.889 00:38:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.889 00:38:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.889 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.147 ************************************ 00:04:10.147 START TEST rpc_daemon_integrity 00:04:10.147 ************************************ 00:04:10.147 00:38:02 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:10.147 00:38:02 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:10.147 00:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.147 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.147 00:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.147 00:38:02 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:10.147 00:38:02 -- rpc/rpc.sh@13 -- # jq length 00:04:10.147 00:38:02 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:10.147 00:38:02 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:10.147 00:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.147 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.147 00:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.147 00:38:02 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:10.147 00:38:02 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:10.147 00:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.147 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.147 00:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.147 00:38:02 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:10.147 { 00:04:10.147 "name": "Malloc2", 00:04:10.147 "aliases": [ 00:04:10.147 "577d69d9-0048-4a73-8eb7-ba6b0328f058" 00:04:10.147 ], 00:04:10.147 "product_name": "Malloc disk", 00:04:10.147 "block_size": 512, 00:04:10.147 "num_blocks": 16384, 00:04:10.147 "uuid": "577d69d9-0048-4a73-8eb7-ba6b0328f058", 00:04:10.147 "assigned_rate_limits": { 00:04:10.147 "rw_ios_per_sec": 0, 00:04:10.147 "rw_mbytes_per_sec": 0, 00:04:10.147 "r_mbytes_per_sec": 0, 00:04:10.147 "w_mbytes_per_sec": 0 00:04:10.147 }, 00:04:10.147 "claimed": false, 00:04:10.147 "zoned": false, 00:04:10.147 "supported_io_types": { 00:04:10.147 "read": true, 00:04:10.147 "write": true, 00:04:10.147 "unmap": true, 00:04:10.147 "write_zeroes": true, 00:04:10.147 "flush": true, 00:04:10.147 "reset": true, 00:04:10.147 "compare": false, 00:04:10.147 "compare_and_write": false, 00:04:10.147 "abort": true, 00:04:10.147 "nvme_admin": false, 00:04:10.147 "nvme_io": false 00:04:10.147 }, 00:04:10.147 "memory_domains": [ 00:04:10.147 { 00:04:10.147 "dma_device_id": "system", 00:04:10.147 
"dma_device_type": 1 00:04:10.147 }, 00:04:10.147 { 00:04:10.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.147 "dma_device_type": 2 00:04:10.147 } 00:04:10.147 ], 00:04:10.147 "driver_specific": {} 00:04:10.147 } 00:04:10.147 ]' 00:04:10.147 00:38:02 -- rpc/rpc.sh@17 -- # jq length 00:04:10.147 00:38:02 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:10.147 00:38:02 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:10.147 00:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.147 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.147 [2024-04-27 00:38:02.804952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:10.147 [2024-04-27 00:38:02.804982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:10.147 [2024-04-27 00:38:02.804993] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe309b0 00:04:10.147 [2024-04-27 00:38:02.804999] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:10.147 [2024-04-27 00:38:02.805959] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:10.147 [2024-04-27 00:38:02.805981] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:10.147 Passthru0 00:04:10.147 00:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.147 00:38:02 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:10.147 00:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.147 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.147 00:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.147 00:38:02 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:10.147 { 00:04:10.147 "name": "Malloc2", 00:04:10.147 "aliases": [ 00:04:10.147 "577d69d9-0048-4a73-8eb7-ba6b0328f058" 00:04:10.147 ], 00:04:10.147 "product_name": "Malloc disk", 00:04:10.147 "block_size": 512, 00:04:10.147 "num_blocks": 16384, 00:04:10.147 "uuid": "577d69d9-0048-4a73-8eb7-ba6b0328f058", 00:04:10.147 "assigned_rate_limits": { 00:04:10.147 "rw_ios_per_sec": 0, 00:04:10.147 "rw_mbytes_per_sec": 0, 00:04:10.147 "r_mbytes_per_sec": 0, 00:04:10.147 "w_mbytes_per_sec": 0 00:04:10.147 }, 00:04:10.147 "claimed": true, 00:04:10.147 "claim_type": "exclusive_write", 00:04:10.147 "zoned": false, 00:04:10.147 "supported_io_types": { 00:04:10.147 "read": true, 00:04:10.147 "write": true, 00:04:10.147 "unmap": true, 00:04:10.147 "write_zeroes": true, 00:04:10.147 "flush": true, 00:04:10.147 "reset": true, 00:04:10.147 "compare": false, 00:04:10.147 "compare_and_write": false, 00:04:10.147 "abort": true, 00:04:10.147 "nvme_admin": false, 00:04:10.147 "nvme_io": false 00:04:10.147 }, 00:04:10.147 "memory_domains": [ 00:04:10.147 { 00:04:10.147 "dma_device_id": "system", 00:04:10.147 "dma_device_type": 1 00:04:10.147 }, 00:04:10.147 { 00:04:10.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.147 "dma_device_type": 2 00:04:10.147 } 00:04:10.147 ], 00:04:10.147 "driver_specific": {} 00:04:10.147 }, 00:04:10.147 { 00:04:10.147 "name": "Passthru0", 00:04:10.147 "aliases": [ 00:04:10.147 "31ca8aa1-ff81-54d6-b2bd-24d1ea2c747e" 00:04:10.147 ], 00:04:10.147 "product_name": "passthru", 00:04:10.147 "block_size": 512, 00:04:10.147 "num_blocks": 16384, 00:04:10.147 "uuid": "31ca8aa1-ff81-54d6-b2bd-24d1ea2c747e", 00:04:10.147 "assigned_rate_limits": { 00:04:10.147 "rw_ios_per_sec": 0, 00:04:10.147 "rw_mbytes_per_sec": 0, 00:04:10.147 "r_mbytes_per_sec": 0, 00:04:10.147 
"w_mbytes_per_sec": 0 00:04:10.147 }, 00:04:10.147 "claimed": false, 00:04:10.147 "zoned": false, 00:04:10.147 "supported_io_types": { 00:04:10.147 "read": true, 00:04:10.147 "write": true, 00:04:10.147 "unmap": true, 00:04:10.147 "write_zeroes": true, 00:04:10.147 "flush": true, 00:04:10.147 "reset": true, 00:04:10.147 "compare": false, 00:04:10.147 "compare_and_write": false, 00:04:10.147 "abort": true, 00:04:10.147 "nvme_admin": false, 00:04:10.147 "nvme_io": false 00:04:10.147 }, 00:04:10.147 "memory_domains": [ 00:04:10.147 { 00:04:10.147 "dma_device_id": "system", 00:04:10.147 "dma_device_type": 1 00:04:10.147 }, 00:04:10.147 { 00:04:10.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:10.147 "dma_device_type": 2 00:04:10.147 } 00:04:10.147 ], 00:04:10.147 "driver_specific": { 00:04:10.147 "passthru": { 00:04:10.147 "name": "Passthru0", 00:04:10.147 "base_bdev_name": "Malloc2" 00:04:10.147 } 00:04:10.147 } 00:04:10.147 } 00:04:10.147 ]' 00:04:10.147 00:38:02 -- rpc/rpc.sh@21 -- # jq length 00:04:10.404 00:38:02 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:10.404 00:38:02 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:10.404 00:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.404 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.404 00:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.404 00:38:02 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:10.404 00:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.404 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.404 00:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.404 00:38:02 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:10.404 00:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:10.404 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.404 00:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:10.404 00:38:02 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:10.404 00:38:02 -- rpc/rpc.sh@26 -- # jq length 00:04:10.404 00:38:02 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:10.404 00:04:10.404 real 0m0.267s 00:04:10.404 user 0m0.173s 00:04:10.404 sys 0m0.031s 00:04:10.404 00:38:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.404 00:38:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.404 ************************************ 00:04:10.404 END TEST rpc_daemon_integrity 00:04:10.404 ************************************ 00:04:10.404 00:38:02 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:10.404 00:38:02 -- rpc/rpc.sh@84 -- # killprocess 1504786 00:04:10.404 00:38:02 -- common/autotest_common.sh@936 -- # '[' -z 1504786 ']' 00:04:10.404 00:38:02 -- common/autotest_common.sh@940 -- # kill -0 1504786 00:04:10.404 00:38:02 -- common/autotest_common.sh@941 -- # uname 00:04:10.404 00:38:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:10.404 00:38:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1504786 00:04:10.404 00:38:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:10.404 00:38:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:10.404 00:38:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1504786' 00:04:10.404 killing process with pid 1504786 00:04:10.404 00:38:03 -- common/autotest_common.sh@955 -- # kill 1504786 00:04:10.404 00:38:03 -- common/autotest_common.sh@960 -- # wait 1504786 00:04:10.660 00:04:10.660 real 0m2.879s 00:04:10.660 user 0m3.772s 
00:04:10.660 sys 0m0.818s 00:04:10.660 00:38:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:10.660 00:38:03 -- common/autotest_common.sh@10 -- # set +x 00:04:10.660 ************************************ 00:04:10.660 END TEST rpc 00:04:10.660 ************************************ 00:04:10.984 00:38:03 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:10.984 00:38:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.984 00:38:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.984 00:38:03 -- common/autotest_common.sh@10 -- # set +x 00:04:10.984 ************************************ 00:04:10.984 START TEST skip_rpc 00:04:10.984 ************************************ 00:04:10.984 00:38:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:10.984 * Looking for test storage... 00:04:10.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:10.984 00:38:03 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.984 00:38:03 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:10.984 00:38:03 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:10.984 00:38:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.984 00:38:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.984 00:38:03 -- common/autotest_common.sh@10 -- # set +x 00:04:11.242 ************************************ 00:04:11.242 START TEST skip_rpc 00:04:11.242 ************************************ 00:04:11.242 00:38:03 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:11.242 00:38:03 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1505471 00:04:11.242 00:38:03 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.242 00:38:03 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:11.242 00:38:03 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:11.242 [2024-04-27 00:38:03.763273] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:04:11.242 [2024-04-27 00:38:03.763307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505471 ] 00:04:11.242 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.242 [2024-04-27 00:38:03.816582] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.242 [2024-04-27 00:38:03.887371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.499 00:38:08 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:16.499 00:38:08 -- common/autotest_common.sh@638 -- # local es=0 00:04:16.499 00:38:08 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:16.499 00:38:08 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:16.499 00:38:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:16.499 00:38:08 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:16.499 00:38:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:16.499 00:38:08 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:16.499 00:38:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:16.499 00:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:16.499 00:38:08 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:16.499 00:38:08 -- common/autotest_common.sh@641 -- # es=1 00:04:16.499 00:38:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:16.499 00:38:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:16.499 00:38:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:16.499 00:38:08 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:16.499 00:38:08 -- rpc/skip_rpc.sh@23 -- # killprocess 1505471 00:04:16.499 00:38:08 -- common/autotest_common.sh@936 -- # '[' -z 1505471 ']' 00:04:16.499 00:38:08 -- common/autotest_common.sh@940 -- # kill -0 1505471 00:04:16.499 00:38:08 -- common/autotest_common.sh@941 -- # uname 00:04:16.499 00:38:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:16.499 00:38:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1505471 00:04:16.499 00:38:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:16.499 00:38:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:16.499 00:38:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1505471' 00:04:16.499 killing process with pid 1505471 00:04:16.499 00:38:08 -- common/autotest_common.sh@955 -- # kill 1505471 00:04:16.499 00:38:08 -- common/autotest_common.sh@960 -- # wait 1505471 00:04:16.499 00:04:16.499 real 0m5.391s 00:04:16.499 user 0m5.175s 00:04:16.499 sys 0m0.247s 00:04:16.499 00:38:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:16.499 00:38:09 -- common/autotest_common.sh@10 -- # set +x 00:04:16.499 ************************************ 00:04:16.499 END TEST skip_rpc 00:04:16.499 ************************************ 00:04:16.499 00:38:09 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:16.499 00:38:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.499 00:38:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.499 00:38:09 -- common/autotest_common.sh@10 -- # set +x 00:04:16.757 ************************************ 00:04:16.757 START TEST skip_rpc_with_json 00:04:16.757 ************************************ 
00:04:16.757 00:38:09 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:16.757 00:38:09 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:16.757 00:38:09 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1506423 00:04:16.757 00:38:09 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.757 00:38:09 -- rpc/skip_rpc.sh@31 -- # waitforlisten 1506423 00:04:16.757 00:38:09 -- common/autotest_common.sh@817 -- # '[' -z 1506423 ']' 00:04:16.757 00:38:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.757 00:38:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:16.757 00:38:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.757 00:38:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:16.757 00:38:09 -- common/autotest_common.sh@10 -- # set +x 00:04:16.757 00:38:09 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:16.757 [2024-04-27 00:38:09.294778] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:04:16.757 [2024-04-27 00:38:09.294816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506423 ] 00:04:16.757 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.757 [2024-04-27 00:38:09.347266] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.757 [2024-04-27 00:38:09.424967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.695 00:38:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:17.695 00:38:10 -- common/autotest_common.sh@850 -- # return 0 00:04:17.695 00:38:10 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:17.695 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.695 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.695 [2024-04-27 00:38:10.084725] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:17.695 request: 00:04:17.695 { 00:04:17.695 "trtype": "tcp", 00:04:17.695 "method": "nvmf_get_transports", 00:04:17.695 "req_id": 1 00:04:17.695 } 00:04:17.695 Got JSON-RPC error response 00:04:17.695 response: 00:04:17.695 { 00:04:17.695 "code": -19, 00:04:17.695 "message": "No such device" 00:04:17.695 } 00:04:17.695 00:38:10 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:17.695 00:38:10 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:17.695 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.695 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.695 [2024-04-27 00:38:10.092811] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:17.695 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.695 00:38:10 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:17.695 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.695 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.695 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.695 00:38:10 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:17.695 { 
00:04:17.695 "subsystems": [ 00:04:17.695 { 00:04:17.695 "subsystem": "vfio_user_target", 00:04:17.695 "config": null 00:04:17.695 }, 00:04:17.695 { 00:04:17.695 "subsystem": "keyring", 00:04:17.695 "config": [] 00:04:17.695 }, 00:04:17.695 { 00:04:17.695 "subsystem": "iobuf", 00:04:17.695 "config": [ 00:04:17.695 { 00:04:17.695 "method": "iobuf_set_options", 00:04:17.695 "params": { 00:04:17.695 "small_pool_count": 8192, 00:04:17.695 "large_pool_count": 1024, 00:04:17.695 "small_bufsize": 8192, 00:04:17.695 "large_bufsize": 135168 00:04:17.695 } 00:04:17.695 } 00:04:17.695 ] 00:04:17.695 }, 00:04:17.695 { 00:04:17.695 "subsystem": "sock", 00:04:17.695 "config": [ 00:04:17.695 { 00:04:17.695 "method": "sock_impl_set_options", 00:04:17.695 "params": { 00:04:17.695 "impl_name": "posix", 00:04:17.695 "recv_buf_size": 2097152, 00:04:17.695 "send_buf_size": 2097152, 00:04:17.695 "enable_recv_pipe": true, 00:04:17.695 "enable_quickack": false, 00:04:17.695 "enable_placement_id": 0, 00:04:17.695 "enable_zerocopy_send_server": true, 00:04:17.695 "enable_zerocopy_send_client": false, 00:04:17.695 "zerocopy_threshold": 0, 00:04:17.695 "tls_version": 0, 00:04:17.695 "enable_ktls": false 00:04:17.695 } 00:04:17.695 }, 00:04:17.695 { 00:04:17.695 "method": "sock_impl_set_options", 00:04:17.695 "params": { 00:04:17.695 "impl_name": "ssl", 00:04:17.695 "recv_buf_size": 4096, 00:04:17.695 "send_buf_size": 4096, 00:04:17.695 "enable_recv_pipe": true, 00:04:17.695 "enable_quickack": false, 00:04:17.695 "enable_placement_id": 0, 00:04:17.695 "enable_zerocopy_send_server": true, 00:04:17.695 "enable_zerocopy_send_client": false, 00:04:17.695 "zerocopy_threshold": 0, 00:04:17.695 "tls_version": 0, 00:04:17.695 "enable_ktls": false 00:04:17.695 } 00:04:17.695 } 00:04:17.695 ] 00:04:17.695 }, 00:04:17.695 { 00:04:17.695 "subsystem": "vmd", 00:04:17.695 "config": [] 00:04:17.695 }, 00:04:17.695 { 00:04:17.695 "subsystem": "accel", 00:04:17.695 "config": [ 00:04:17.695 { 00:04:17.695 "method": "accel_set_options", 00:04:17.695 "params": { 00:04:17.695 "small_cache_size": 128, 00:04:17.695 "large_cache_size": 16, 00:04:17.695 "task_count": 2048, 00:04:17.695 "sequence_count": 2048, 00:04:17.695 "buf_count": 2048 00:04:17.695 } 00:04:17.695 } 00:04:17.695 ] 00:04:17.695 }, 00:04:17.695 { 00:04:17.695 "subsystem": "bdev", 00:04:17.695 "config": [ 00:04:17.695 { 00:04:17.695 "method": "bdev_set_options", 00:04:17.695 "params": { 00:04:17.695 "bdev_io_pool_size": 65535, 00:04:17.695 "bdev_io_cache_size": 256, 00:04:17.695 "bdev_auto_examine": true, 00:04:17.695 "iobuf_small_cache_size": 128, 00:04:17.695 "iobuf_large_cache_size": 16 00:04:17.695 } 00:04:17.695 }, 00:04:17.695 { 00:04:17.695 "method": "bdev_raid_set_options", 00:04:17.695 "params": { 00:04:17.695 "process_window_size_kb": 1024 00:04:17.695 } 00:04:17.695 }, 00:04:17.695 { 00:04:17.695 "method": "bdev_iscsi_set_options", 00:04:17.695 "params": { 00:04:17.695 "timeout_sec": 30 00:04:17.695 } 00:04:17.695 }, 00:04:17.695 { 00:04:17.695 "method": "bdev_nvme_set_options", 00:04:17.695 "params": { 00:04:17.695 "action_on_timeout": "none", 00:04:17.695 "timeout_us": 0, 00:04:17.695 "timeout_admin_us": 0, 00:04:17.695 "keep_alive_timeout_ms": 10000, 00:04:17.695 "arbitration_burst": 0, 00:04:17.695 "low_priority_weight": 0, 00:04:17.695 "medium_priority_weight": 0, 00:04:17.695 "high_priority_weight": 0, 00:04:17.695 "nvme_adminq_poll_period_us": 10000, 00:04:17.695 "nvme_ioq_poll_period_us": 0, 00:04:17.695 "io_queue_requests": 0, 00:04:17.695 
"delay_cmd_submit": true, 00:04:17.695 "transport_retry_count": 4, 00:04:17.695 "bdev_retry_count": 3, 00:04:17.695 "transport_ack_timeout": 0, 00:04:17.695 "ctrlr_loss_timeout_sec": 0, 00:04:17.695 "reconnect_delay_sec": 0, 00:04:17.695 "fast_io_fail_timeout_sec": 0, 00:04:17.695 "disable_auto_failback": false, 00:04:17.695 "generate_uuids": false, 00:04:17.696 "transport_tos": 0, 00:04:17.696 "nvme_error_stat": false, 00:04:17.696 "rdma_srq_size": 0, 00:04:17.696 "io_path_stat": false, 00:04:17.696 "allow_accel_sequence": false, 00:04:17.696 "rdma_max_cq_size": 0, 00:04:17.696 "rdma_cm_event_timeout_ms": 0, 00:04:17.696 "dhchap_digests": [ 00:04:17.696 "sha256", 00:04:17.696 "sha384", 00:04:17.696 "sha512" 00:04:17.696 ], 00:04:17.696 "dhchap_dhgroups": [ 00:04:17.696 "null", 00:04:17.696 "ffdhe2048", 00:04:17.696 "ffdhe3072", 00:04:17.696 "ffdhe4096", 00:04:17.696 "ffdhe6144", 00:04:17.696 "ffdhe8192" 00:04:17.696 ] 00:04:17.696 } 00:04:17.696 }, 00:04:17.696 { 00:04:17.696 "method": "bdev_nvme_set_hotplug", 00:04:17.696 "params": { 00:04:17.696 "period_us": 100000, 00:04:17.696 "enable": false 00:04:17.696 } 00:04:17.696 }, 00:04:17.696 { 00:04:17.696 "method": "bdev_wait_for_examine" 00:04:17.696 } 00:04:17.696 ] 00:04:17.696 }, 00:04:17.696 { 00:04:17.696 "subsystem": "scsi", 00:04:17.696 "config": null 00:04:17.696 }, 00:04:17.696 { 00:04:17.696 "subsystem": "scheduler", 00:04:17.696 "config": [ 00:04:17.696 { 00:04:17.696 "method": "framework_set_scheduler", 00:04:17.696 "params": { 00:04:17.696 "name": "static" 00:04:17.696 } 00:04:17.696 } 00:04:17.696 ] 00:04:17.696 }, 00:04:17.696 { 00:04:17.696 "subsystem": "vhost_scsi", 00:04:17.696 "config": [] 00:04:17.696 }, 00:04:17.696 { 00:04:17.696 "subsystem": "vhost_blk", 00:04:17.696 "config": [] 00:04:17.696 }, 00:04:17.696 { 00:04:17.696 "subsystem": "ublk", 00:04:17.696 "config": [] 00:04:17.696 }, 00:04:17.696 { 00:04:17.696 "subsystem": "nbd", 00:04:17.696 "config": [] 00:04:17.696 }, 00:04:17.696 { 00:04:17.696 "subsystem": "nvmf", 00:04:17.696 "config": [ 00:04:17.696 { 00:04:17.696 "method": "nvmf_set_config", 00:04:17.696 "params": { 00:04:17.696 "discovery_filter": "match_any", 00:04:17.696 "admin_cmd_passthru": { 00:04:17.696 "identify_ctrlr": false 00:04:17.696 } 00:04:17.696 } 00:04:17.696 }, 00:04:17.696 { 00:04:17.696 "method": "nvmf_set_max_subsystems", 00:04:17.696 "params": { 00:04:17.696 "max_subsystems": 1024 00:04:17.696 } 00:04:17.696 }, 00:04:17.696 { 00:04:17.696 "method": "nvmf_set_crdt", 00:04:17.696 "params": { 00:04:17.696 "crdt1": 0, 00:04:17.696 "crdt2": 0, 00:04:17.696 "crdt3": 0 00:04:17.696 } 00:04:17.696 }, 00:04:17.696 { 00:04:17.696 "method": "nvmf_create_transport", 00:04:17.696 "params": { 00:04:17.696 "trtype": "TCP", 00:04:17.696 "max_queue_depth": 128, 00:04:17.696 "max_io_qpairs_per_ctrlr": 127, 00:04:17.696 "in_capsule_data_size": 4096, 00:04:17.696 "max_io_size": 131072, 00:04:17.696 "io_unit_size": 131072, 00:04:17.696 "max_aq_depth": 128, 00:04:17.696 "num_shared_buffers": 511, 00:04:17.696 "buf_cache_size": 4294967295, 00:04:17.696 "dif_insert_or_strip": false, 00:04:17.696 "zcopy": false, 00:04:17.696 "c2h_success": true, 00:04:17.696 "sock_priority": 0, 00:04:17.696 "abort_timeout_sec": 1, 00:04:17.696 "ack_timeout": 0, 00:04:17.696 "data_wr_pool_size": 0 00:04:17.696 } 00:04:17.696 } 00:04:17.696 ] 00:04:17.696 }, 00:04:17.696 { 00:04:17.696 "subsystem": "iscsi", 00:04:17.696 "config": [ 00:04:17.696 { 00:04:17.696 "method": "iscsi_set_options", 00:04:17.696 "params": { 00:04:17.696 
"node_base": "iqn.2016-06.io.spdk", 00:04:17.696 "max_sessions": 128, 00:04:17.696 "max_connections_per_session": 2, 00:04:17.696 "max_queue_depth": 64, 00:04:17.696 "default_time2wait": 2, 00:04:17.696 "default_time2retain": 20, 00:04:17.696 "first_burst_length": 8192, 00:04:17.696 "immediate_data": true, 00:04:17.696 "allow_duplicated_isid": false, 00:04:17.696 "error_recovery_level": 0, 00:04:17.696 "nop_timeout": 60, 00:04:17.696 "nop_in_interval": 30, 00:04:17.696 "disable_chap": false, 00:04:17.696 "require_chap": false, 00:04:17.696 "mutual_chap": false, 00:04:17.696 "chap_group": 0, 00:04:17.696 "max_large_datain_per_connection": 64, 00:04:17.696 "max_r2t_per_connection": 4, 00:04:17.696 "pdu_pool_size": 36864, 00:04:17.696 "immediate_data_pool_size": 16384, 00:04:17.696 "data_out_pool_size": 2048 00:04:17.696 } 00:04:17.696 } 00:04:17.696 ] 00:04:17.696 } 00:04:17.696 ] 00:04:17.696 } 00:04:17.696 00:38:10 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:17.696 00:38:10 -- rpc/skip_rpc.sh@40 -- # killprocess 1506423 00:04:17.696 00:38:10 -- common/autotest_common.sh@936 -- # '[' -z 1506423 ']' 00:04:17.696 00:38:10 -- common/autotest_common.sh@940 -- # kill -0 1506423 00:04:17.696 00:38:10 -- common/autotest_common.sh@941 -- # uname 00:04:17.696 00:38:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:17.696 00:38:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1506423 00:04:17.696 00:38:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:17.696 00:38:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:17.696 00:38:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1506423' 00:04:17.696 killing process with pid 1506423 00:04:17.696 00:38:10 -- common/autotest_common.sh@955 -- # kill 1506423 00:04:17.696 00:38:10 -- common/autotest_common.sh@960 -- # wait 1506423 00:04:17.955 00:38:10 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1506661 00:04:17.955 00:38:10 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:17.955 00:38:10 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:23.230 00:38:15 -- rpc/skip_rpc.sh@50 -- # killprocess 1506661 00:04:23.230 00:38:15 -- common/autotest_common.sh@936 -- # '[' -z 1506661 ']' 00:04:23.230 00:38:15 -- common/autotest_common.sh@940 -- # kill -0 1506661 00:04:23.230 00:38:15 -- common/autotest_common.sh@941 -- # uname 00:04:23.230 00:38:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:23.230 00:38:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1506661 00:04:23.230 00:38:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:23.230 00:38:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:23.230 00:38:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1506661' 00:04:23.230 killing process with pid 1506661 00:04:23.230 00:38:15 -- common/autotest_common.sh@955 -- # kill 1506661 00:04:23.230 00:38:15 -- common/autotest_common.sh@960 -- # wait 1506661 00:04:23.491 00:38:15 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:23.491 00:38:16 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:23.491 00:04:23.491 real 0m6.754s 00:04:23.491 user 0m6.599s 00:04:23.491 sys 0m0.545s 00:04:23.491 
00:38:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.491 00:38:16 -- common/autotest_common.sh@10 -- # set +x 00:04:23.491 ************************************ 00:04:23.491 END TEST skip_rpc_with_json 00:04:23.491 ************************************ 00:04:23.491 00:38:16 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:23.491 00:38:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.491 00:38:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.491 00:38:16 -- common/autotest_common.sh@10 -- # set +x 00:04:23.491 ************************************ 00:04:23.491 START TEST skip_rpc_with_delay 00:04:23.491 ************************************ 00:04:23.491 00:38:16 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:04:23.491 00:38:16 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:23.491 00:38:16 -- common/autotest_common.sh@638 -- # local es=0 00:04:23.491 00:38:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:23.491 00:38:16 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.491 00:38:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:23.491 00:38:16 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.491 00:38:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:23.491 00:38:16 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.491 00:38:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:23.491 00:38:16 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.491 00:38:16 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:23.491 00:38:16 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:23.750 [2024-04-27 00:38:16.196394] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:23.750 [2024-04-27 00:38:16.196449] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:23.750 00:38:16 -- common/autotest_common.sh@641 -- # es=1 00:04:23.750 00:38:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:23.750 00:38:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:23.750 00:38:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:23.750 00:04:23.750 real 0m0.059s 00:04:23.750 user 0m0.041s 00:04:23.750 sys 0m0.016s 00:04:23.750 00:38:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:23.750 00:38:16 -- common/autotest_common.sh@10 -- # set +x 00:04:23.750 ************************************ 00:04:23.750 END TEST skip_rpc_with_delay 00:04:23.750 ************************************ 00:04:23.750 00:38:16 -- rpc/skip_rpc.sh@77 -- # uname 00:04:23.750 00:38:16 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:23.750 00:38:16 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:23.750 00:38:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:23.750 00:38:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:23.750 00:38:16 -- common/autotest_common.sh@10 -- # set +x 00:04:23.750 ************************************ 00:04:23.750 START TEST exit_on_failed_rpc_init 00:04:23.750 ************************************ 00:04:23.750 00:38:16 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:04:23.750 00:38:16 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1507653 00:04:23.750 00:38:16 -- rpc/skip_rpc.sh@63 -- # waitforlisten 1507653 00:04:23.750 00:38:16 -- common/autotest_common.sh@817 -- # '[' -z 1507653 ']' 00:04:23.750 00:38:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.750 00:38:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:23.750 00:38:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.750 00:38:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:23.750 00:38:16 -- common/autotest_common.sh@10 -- # set +x 00:04:23.750 00:38:16 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:23.750 [2024-04-27 00:38:16.407507] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:04:23.750 [2024-04-27 00:38:16.407545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507653 ] 00:04:23.750 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.009 [2024-04-27 00:38:16.459619] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.009 [2024-04-27 00:38:16.538054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.577 00:38:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:24.577 00:38:17 -- common/autotest_common.sh@850 -- # return 0 00:04:24.577 00:38:17 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.577 00:38:17 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:24.577 00:38:17 -- common/autotest_common.sh@638 -- # local es=0 00:04:24.577 00:38:17 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:24.577 00:38:17 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.577 00:38:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:24.577 00:38:17 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.577 00:38:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:24.577 00:38:17 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.577 00:38:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:24.577 00:38:17 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:24.577 00:38:17 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:24.577 00:38:17 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:24.577 [2024-04-27 00:38:17.247436] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:04:24.577 [2024-04-27 00:38:17.247484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507877 ] 00:04:24.577 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.837 [2024-04-27 00:38:17.300441] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.837 [2024-04-27 00:38:17.370370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.837 [2024-04-27 00:38:17.370433] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
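The "in use. Specify another." error above is the point of exit_on_failed_rpc_init: a second target instance is started while the first still owns the default RPC socket, so RPC initialization fails and the app stops with a non-zero status. A minimal sketch of the collision, assuming the default /var/tmp/spdk.sock path:

    ./build/bin/spdk_tgt -m 0x1 &     # first instance claims /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x2       # second instance exits: RPC Unix domain socket path already in use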
00:04:24.837 [2024-04-27 00:38:17.370441] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:24.837 [2024-04-27 00:38:17.370446] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:24.837 00:38:17 -- common/autotest_common.sh@641 -- # es=234 00:04:24.837 00:38:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:24.837 00:38:17 -- common/autotest_common.sh@650 -- # es=106 00:04:24.837 00:38:17 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:24.837 00:38:17 -- common/autotest_common.sh@658 -- # es=1 00:04:24.837 00:38:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:24.837 00:38:17 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:24.837 00:38:17 -- rpc/skip_rpc.sh@70 -- # killprocess 1507653 00:04:24.837 00:38:17 -- common/autotest_common.sh@936 -- # '[' -z 1507653 ']' 00:04:24.837 00:38:17 -- common/autotest_common.sh@940 -- # kill -0 1507653 00:04:24.837 00:38:17 -- common/autotest_common.sh@941 -- # uname 00:04:24.837 00:38:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:24.837 00:38:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1507653 00:04:24.837 00:38:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:24.837 00:38:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:24.837 00:38:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1507653' 00:04:24.837 killing process with pid 1507653 00:04:24.837 00:38:17 -- common/autotest_common.sh@955 -- # kill 1507653 00:04:24.837 00:38:17 -- common/autotest_common.sh@960 -- # wait 1507653 00:04:25.406 00:04:25.406 real 0m1.468s 00:04:25.406 user 0m1.709s 00:04:25.406 sys 0m0.363s 00:04:25.406 00:38:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:25.406 00:38:17 -- common/autotest_common.sh@10 -- # set +x 00:04:25.406 ************************************ 00:04:25.406 END TEST exit_on_failed_rpc_init 00:04:25.406 ************************************ 00:04:25.406 00:38:17 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:25.406 00:04:25.406 real 0m14.348s 00:04:25.406 user 0m13.765s 00:04:25.406 sys 0m1.566s 00:04:25.406 00:38:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:25.406 00:38:17 -- common/autotest_common.sh@10 -- # set +x 00:04:25.406 ************************************ 00:04:25.406 END TEST skip_rpc 00:04:25.406 ************************************ 00:04:25.406 00:38:17 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:25.406 00:38:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.406 00:38:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.406 00:38:17 -- common/autotest_common.sh@10 -- # set +x 00:04:25.406 ************************************ 00:04:25.406 START TEST rpc_client 00:04:25.406 ************************************ 00:04:25.406 00:38:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:25.665 * Looking for test storage... 
00:04:25.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:25.665 00:38:18 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:25.665 OK 00:04:25.665 00:38:18 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:25.665 00:04:25.665 real 0m0.105s 00:04:25.665 user 0m0.046s 00:04:25.665 sys 0m0.068s 00:04:25.665 00:38:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:25.665 00:38:18 -- common/autotest_common.sh@10 -- # set +x 00:04:25.665 ************************************ 00:04:25.665 END TEST rpc_client 00:04:25.665 ************************************ 00:04:25.665 00:38:18 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:25.665 00:38:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.665 00:38:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.665 00:38:18 -- common/autotest_common.sh@10 -- # set +x 00:04:25.665 ************************************ 00:04:25.665 START TEST json_config 00:04:25.665 ************************************ 00:04:25.665 00:38:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:25.925 00:38:18 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:25.925 00:38:18 -- nvmf/common.sh@7 -- # uname -s 00:04:25.925 00:38:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:25.925 00:38:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:25.925 00:38:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:25.925 00:38:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:25.926 00:38:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:25.926 00:38:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:25.926 00:38:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:25.926 00:38:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:25.926 00:38:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:25.926 00:38:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:25.926 00:38:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:25.926 00:38:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:25.926 00:38:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:25.926 00:38:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:25.926 00:38:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:25.926 00:38:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:25.926 00:38:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:25.926 00:38:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:25.926 00:38:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:25.926 00:38:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:25.926 00:38:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.926 00:38:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.926 00:38:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.926 00:38:18 -- paths/export.sh@5 -- # export PATH 00:04:25.926 00:38:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.926 00:38:18 -- nvmf/common.sh@47 -- # : 0 00:04:25.926 00:38:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:25.926 00:38:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:25.926 00:38:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:25.926 00:38:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:25.926 00:38:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:25.926 00:38:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:25.926 00:38:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:25.926 00:38:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:25.926 00:38:18 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:25.926 00:38:18 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:25.926 00:38:18 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:25.926 00:38:18 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:25.926 00:38:18 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:25.926 00:38:18 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:25.926 00:38:18 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:25.926 00:38:18 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:25.926 00:38:18 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:25.926 00:38:18 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:25.926 00:38:18 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:04:25.926 00:38:18 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:25.926 00:38:18 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:25.926 00:38:18 -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:25.926 00:38:18 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:25.926 00:38:18 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:25.926 INFO: JSON configuration test init 00:04:25.926 00:38:18 -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:25.926 00:38:18 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:25.926 00:38:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:25.926 00:38:18 -- common/autotest_common.sh@10 -- # set +x 00:04:25.926 00:38:18 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:25.926 00:38:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:25.926 00:38:18 -- common/autotest_common.sh@10 -- # set +x 00:04:25.926 00:38:18 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:25.926 00:38:18 -- json_config/common.sh@9 -- # local app=target 00:04:25.926 00:38:18 -- json_config/common.sh@10 -- # shift 00:04:25.926 00:38:18 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:25.926 00:38:18 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:25.926 00:38:18 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:25.926 00:38:18 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.926 00:38:18 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.926 00:38:18 -- json_config/common.sh@22 -- # app_pid["$app"]=1508230 00:04:25.926 00:38:18 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:25.926 Waiting for target to run... 00:04:25.926 00:38:18 -- json_config/common.sh@25 -- # waitforlisten 1508230 /var/tmp/spdk_tgt.sock 00:04:25.926 00:38:18 -- common/autotest_common.sh@817 -- # '[' -z 1508230 ']' 00:04:25.926 00:38:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:25.926 00:38:18 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:25.926 00:38:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:25.926 00:38:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:25.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:25.926 00:38:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:25.926 00:38:18 -- common/autotest_common.sh@10 -- # set +x 00:04:25.926 [2024-04-27 00:38:18.464225] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:04:25.926 [2024-04-27 00:38:18.464273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508230 ] 00:04:25.926 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.495 [2024-04-27 00:38:18.898831] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.495 [2024-04-27 00:38:18.986499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.755 00:38:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:26.755 00:38:19 -- common/autotest_common.sh@850 -- # return 0 00:04:26.755 00:38:19 -- json_config/common.sh@26 -- # echo '' 00:04:26.755 00:04:26.755 00:38:19 -- json_config/json_config.sh@269 -- # create_accel_config 00:04:26.755 00:38:19 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:26.755 00:38:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:26.755 00:38:19 -- common/autotest_common.sh@10 -- # set +x 00:04:26.755 00:38:19 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:26.755 00:38:19 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:26.755 00:38:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:26.755 00:38:19 -- common/autotest_common.sh@10 -- # set +x 00:04:26.755 00:38:19 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:26.755 00:38:19 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:26.755 00:38:19 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:30.048 00:38:22 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:30.048 00:38:22 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:30.048 00:38:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:30.048 00:38:22 -- common/autotest_common.sh@10 -- # set +x 00:04:30.048 00:38:22 -- json_config/json_config.sh@45 -- # local ret=0 00:04:30.048 00:38:22 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:30.048 00:38:22 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:30.048 00:38:22 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:30.048 00:38:22 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:30.048 00:38:22 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:30.048 00:38:22 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:30.048 00:38:22 -- json_config/json_config.sh@48 -- # local get_types 00:04:30.048 00:38:22 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:30.048 00:38:22 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:30.048 00:38:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:30.048 00:38:22 -- common/autotest_common.sh@10 -- # set +x 00:04:30.048 00:38:22 -- json_config/json_config.sh@55 -- # return 0 00:04:30.048 00:38:22 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:30.048 00:38:22 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:30.048 00:38:22 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:30.048 00:38:22 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:30.048 00:38:22 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:30.048 00:38:22 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:30.048 00:38:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:30.048 00:38:22 -- common/autotest_common.sh@10 -- # set +x 00:04:30.048 00:38:22 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:30.048 00:38:22 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:30.048 00:38:22 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:30.048 00:38:22 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.048 00:38:22 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:30.048 MallocForNvmf0 00:04:30.048 00:38:22 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.048 00:38:22 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:30.307 MallocForNvmf1 00:04:30.307 00:38:22 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.307 00:38:22 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:30.566 [2024-04-27 00:38:23.042959] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.566 00:38:23 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.566 00:38:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:30.566 00:38:23 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.566 00:38:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:30.825 00:38:23 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:30.825 00:38:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:31.083 00:38:23 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.083 00:38:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:31.083 [2024-04-27 00:38:23.737132] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.083 00:38:23 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:31.083 00:38:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:31.083 
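The create_nvmf_subsystem_config step that finishes above builds the NVMe-oF target state over the RPC socket. A condensed sketch of the same calls, taken from the rpc.py invocations in the trace (socket path and values exactly as shown):

    RPC='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420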
00:38:23 -- common/autotest_common.sh@10 -- # set +x 00:04:31.342 00:38:23 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:31.342 00:38:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:31.342 00:38:23 -- common/autotest_common.sh@10 -- # set +x 00:04:31.342 00:38:23 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:31.342 00:38:23 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.342 00:38:23 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.342 MallocBdevForConfigChangeCheck 00:04:31.342 00:38:23 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:31.342 00:38:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:31.342 00:38:23 -- common/autotest_common.sh@10 -- # set +x 00:04:31.342 00:38:24 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:31.342 00:38:24 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.917 00:38:24 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:31.917 INFO: shutting down applications... 00:04:31.917 00:38:24 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:31.917 00:38:24 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:31.917 00:38:24 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:31.917 00:38:24 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:33.294 Calling clear_iscsi_subsystem 00:04:33.294 Calling clear_nvmf_subsystem 00:04:33.294 Calling clear_nbd_subsystem 00:04:33.295 Calling clear_ublk_subsystem 00:04:33.295 Calling clear_vhost_blk_subsystem 00:04:33.295 Calling clear_vhost_scsi_subsystem 00:04:33.295 Calling clear_bdev_subsystem 00:04:33.295 00:38:25 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:33.295 00:38:25 -- json_config/json_config.sh@343 -- # count=100 00:04:33.295 00:38:25 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:33.295 00:38:25 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:33.295 00:38:25 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:33.295 00:38:25 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:33.554 00:38:26 -- json_config/json_config.sh@345 -- # break 00:04:33.554 00:38:26 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:33.554 00:38:26 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:33.554 00:38:26 -- json_config/common.sh@31 -- # local app=target 00:04:33.554 00:38:26 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.554 00:38:26 -- json_config/common.sh@35 -- # [[ -n 1508230 ]] 00:04:33.554 00:38:26 -- json_config/common.sh@38 -- # kill -SIGINT 1508230 00:04:33.554 00:38:26 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.554 00:38:26 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.554 00:38:26 -- json_config/common.sh@41 -- # kill -0 1508230 00:04:33.554 00:38:26 -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.123 00:38:26 -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.123 00:38:26 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.123 00:38:26 -- json_config/common.sh@41 -- # kill -0 1508230 00:04:34.123 00:38:26 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.123 00:38:26 -- json_config/common.sh@43 -- # break 00:04:34.123 00:38:26 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.123 00:38:26 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.123 SPDK target shutdown done 00:04:34.123 00:38:26 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:34.123 INFO: relaunching applications... 00:04:34.123 00:38:26 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.123 00:38:26 -- json_config/common.sh@9 -- # local app=target 00:04:34.123 00:38:26 -- json_config/common.sh@10 -- # shift 00:04:34.123 00:38:26 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:34.123 00:38:26 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:34.123 00:38:26 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:34.123 00:38:26 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.123 00:38:26 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.123 00:38:26 -- json_config/common.sh@22 -- # app_pid["$app"]=1509734 00:04:34.123 00:38:26 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:34.123 Waiting for target to run... 00:04:34.123 00:38:26 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.123 00:38:26 -- json_config/common.sh@25 -- # waitforlisten 1509734 /var/tmp/spdk_tgt.sock 00:04:34.123 00:38:26 -- common/autotest_common.sh@817 -- # '[' -z 1509734 ']' 00:04:34.123 00:38:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:34.123 00:38:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:34.123 00:38:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:34.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:34.123 00:38:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:34.123 00:38:26 -- common/autotest_common.sh@10 -- # set +x 00:04:34.123 [2024-04-27 00:38:26.759224] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:04:34.123 [2024-04-27 00:38:26.759283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509734 ] 00:04:34.123 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.691 [2024-04-27 00:38:27.194748] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.691 [2024-04-27 00:38:27.284823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.033 [2024-04-27 00:38:30.285155] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:38.033 [2024-04-27 00:38:30.317476] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:38.314 00:38:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:38.314 00:38:30 -- common/autotest_common.sh@850 -- # return 0 00:04:38.314 00:38:30 -- json_config/common.sh@26 -- # echo '' 00:04:38.314 00:04:38.314 00:38:30 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:38.314 00:38:30 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:38.314 INFO: Checking if target configuration is the same... 00:04:38.314 00:38:30 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:38.314 00:38:30 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.314 00:38:30 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.314 + '[' 2 -ne 2 ']' 00:04:38.314 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:38.314 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:38.314 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:38.314 +++ basename /dev/fd/62 00:04:38.314 ++ mktemp /tmp/62.XXX 00:04:38.314 + tmp_file_1=/tmp/62.JKa 00:04:38.314 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.314 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:38.314 + tmp_file_2=/tmp/spdk_tgt_config.json.1dC 00:04:38.314 + ret=0 00:04:38.314 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:38.583 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:38.583 + diff -u /tmp/62.JKa /tmp/spdk_tgt_config.json.1dC 00:04:38.583 + echo 'INFO: JSON config files are the same' 00:04:38.583 INFO: JSON config files are the same 00:04:38.583 + rm /tmp/62.JKa /tmp/spdk_tgt_config.json.1dC 00:04:38.583 + exit 0 00:04:38.583 00:38:31 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:38.583 00:38:31 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:38.583 INFO: changing configuration and checking if this can be detected... 
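Both the identical-config check that just passed and the change-detection check announced above rely on the same normalize-and-diff pipeline in json_diff.sh: the live configuration is dumped with save_config, both sides are canonicalized with config_filter.py -method sort, and the results are compared with diff -u. A rough sketch, assuming config_filter.py filters stdin to stdout (as the trace suggests) and using illustrative temp-file names:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | \
        ./test/json_config/config_filter.py -method sort > /tmp/live.json
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'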
00:04:38.583 00:38:31 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:38.583 00:38:31 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:38.843 00:38:31 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.843 00:38:31 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:38.843 00:38:31 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.843 + '[' 2 -ne 2 ']' 00:04:38.843 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:38.843 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:38.843 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:38.843 +++ basename /dev/fd/62 00:04:38.843 ++ mktemp /tmp/62.XXX 00:04:38.843 + tmp_file_1=/tmp/62.jwz 00:04:38.843 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.843 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:38.843 + tmp_file_2=/tmp/spdk_tgt_config.json.wyp 00:04:38.843 + ret=0 00:04:38.843 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.102 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:39.102 + diff -u /tmp/62.jwz /tmp/spdk_tgt_config.json.wyp 00:04:39.360 + ret=1 00:04:39.360 + echo '=== Start of file: /tmp/62.jwz ===' 00:04:39.360 + cat /tmp/62.jwz 00:04:39.360 + echo '=== End of file: /tmp/62.jwz ===' 00:04:39.360 + echo '' 00:04:39.360 + echo '=== Start of file: /tmp/spdk_tgt_config.json.wyp ===' 00:04:39.360 + cat /tmp/spdk_tgt_config.json.wyp 00:04:39.360 + echo '=== End of file: /tmp/spdk_tgt_config.json.wyp ===' 00:04:39.360 + echo '' 00:04:39.360 + rm /tmp/62.jwz /tmp/spdk_tgt_config.json.wyp 00:04:39.360 + exit 1 00:04:39.360 00:38:31 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:39.360 INFO: configuration change detected. 
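The change detected above is triggered deliberately: MallocBdevForConfigChangeCheck exists only so the test can delete it and force the live configuration to diverge from spdk_tgt_config.json, which makes the sort-and-diff step return 1. The two rpc.py calls involved, as they appear in the trace:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # A subsequent save_config no longer matches the saved file, so the diff check now reports a change.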
00:04:39.360 00:38:31 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:39.360 00:38:31 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:39.360 00:38:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:39.360 00:38:31 -- common/autotest_common.sh@10 -- # set +x 00:04:39.360 00:38:31 -- json_config/json_config.sh@307 -- # local ret=0 00:04:39.360 00:38:31 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:39.360 00:38:31 -- json_config/json_config.sh@317 -- # [[ -n 1509734 ]] 00:04:39.360 00:38:31 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:39.360 00:38:31 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:39.361 00:38:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:39.361 00:38:31 -- common/autotest_common.sh@10 -- # set +x 00:04:39.361 00:38:31 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:39.361 00:38:31 -- json_config/json_config.sh@193 -- # uname -s 00:04:39.361 00:38:31 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:39.361 00:38:31 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:39.361 00:38:31 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:39.361 00:38:31 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:39.361 00:38:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:39.361 00:38:31 -- common/autotest_common.sh@10 -- # set +x 00:04:39.361 00:38:31 -- json_config/json_config.sh@323 -- # killprocess 1509734 00:04:39.361 00:38:31 -- common/autotest_common.sh@936 -- # '[' -z 1509734 ']' 00:04:39.361 00:38:31 -- common/autotest_common.sh@940 -- # kill -0 1509734 00:04:39.361 00:38:31 -- common/autotest_common.sh@941 -- # uname 00:04:39.361 00:38:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:39.361 00:38:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1509734 00:04:39.361 00:38:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:39.361 00:38:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:39.361 00:38:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1509734' 00:04:39.361 killing process with pid 1509734 00:04:39.361 00:38:31 -- common/autotest_common.sh@955 -- # kill 1509734 00:04:39.361 00:38:31 -- common/autotest_common.sh@960 -- # wait 1509734 00:04:41.266 00:38:33 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:41.266 00:38:33 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:41.266 00:38:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:41.266 00:38:33 -- common/autotest_common.sh@10 -- # set +x 00:04:41.266 00:38:33 -- json_config/json_config.sh@328 -- # return 0 00:04:41.266 00:38:33 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:41.266 INFO: Success 00:04:41.266 00:04:41.266 real 0m15.190s 00:04:41.266 user 0m15.790s 00:04:41.266 sys 0m2.035s 00:04:41.266 00:38:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:41.266 00:38:33 -- common/autotest_common.sh@10 -- # set +x 00:04:41.266 ************************************ 00:04:41.266 END TEST json_config 00:04:41.266 ************************************ 00:04:41.266 00:38:33 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:41.266 00:38:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.266 00:38:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.266 00:38:33 -- common/autotest_common.sh@10 -- # set +x 00:04:41.266 ************************************ 00:04:41.266 START TEST json_config_extra_key 00:04:41.266 ************************************ 00:04:41.266 00:38:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:41.266 00:38:33 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:41.266 00:38:33 -- nvmf/common.sh@7 -- # uname -s 00:04:41.266 00:38:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.266 00:38:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.266 00:38:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.266 00:38:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.266 00:38:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.266 00:38:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.266 00:38:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.266 00:38:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.266 00:38:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.266 00:38:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.266 00:38:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:41.266 00:38:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:41.266 00:38:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.266 00:38:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.266 00:38:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.266 00:38:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.266 00:38:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:41.266 00:38:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.266 00:38:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.266 00:38:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.266 00:38:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.266 00:38:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.266 00:38:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.266 00:38:33 -- paths/export.sh@5 -- # export PATH 00:04:41.266 00:38:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.266 00:38:33 -- nvmf/common.sh@47 -- # : 0 00:04:41.266 00:38:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:41.266 00:38:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:41.266 00:38:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.266 00:38:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.266 00:38:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.266 00:38:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:41.266 00:38:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:41.266 00:38:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:41.266 00:38:33 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:41.266 00:38:33 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:41.266 00:38:33 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:41.266 00:38:33 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:41.266 00:38:33 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:41.266 00:38:33 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:41.266 00:38:33 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:41.267 00:38:33 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:41.267 00:38:33 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:41.267 00:38:33 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.267 00:38:33 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:41.267 INFO: launching applications... 
00:04:41.267 00:38:33 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:41.267 00:38:33 -- json_config/common.sh@9 -- # local app=target 00:04:41.267 00:38:33 -- json_config/common.sh@10 -- # shift 00:04:41.267 00:38:33 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.267 00:38:33 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.267 00:38:33 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.267 00:38:33 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.267 00:38:33 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.267 00:38:33 -- json_config/common.sh@22 -- # app_pid["$app"]=1511018 00:04:41.267 00:38:33 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.267 Waiting for target to run... 00:04:41.267 00:38:33 -- json_config/common.sh@25 -- # waitforlisten 1511018 /var/tmp/spdk_tgt.sock 00:04:41.267 00:38:33 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:41.267 00:38:33 -- common/autotest_common.sh@817 -- # '[' -z 1511018 ']' 00:04:41.267 00:38:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.267 00:38:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:41.267 00:38:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.267 00:38:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:41.267 00:38:33 -- common/autotest_common.sh@10 -- # set +x 00:04:41.267 [2024-04-27 00:38:33.792795] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:04:41.267 [2024-04-27 00:38:33.792844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511018 ] 00:04:41.267 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.526 [2024-04-27 00:38:34.064048] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.526 [2024-04-27 00:38:34.129951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.093 00:38:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:42.093 00:38:34 -- common/autotest_common.sh@850 -- # return 0 00:04:42.093 00:38:34 -- json_config/common.sh@26 -- # echo '' 00:04:42.093 00:04:42.093 00:38:34 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:42.093 INFO: shutting down applications... 
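The shutdown that follows is the graceful path used throughout these tests: json_config/common.sh sends SIGINT to the target and then polls it with kill -0, sleeping half a second between checks, for at most 30 iterations. A sketch of that loop, with pid standing in for the stored app_pid:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # stop polling once the target has exited
        sleep 0.5
    done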
00:04:42.093 00:38:34 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:42.093 00:38:34 -- json_config/common.sh@31 -- # local app=target 00:04:42.093 00:38:34 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.093 00:38:34 -- json_config/common.sh@35 -- # [[ -n 1511018 ]] 00:04:42.093 00:38:34 -- json_config/common.sh@38 -- # kill -SIGINT 1511018 00:04:42.093 00:38:34 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.093 00:38:34 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.093 00:38:34 -- json_config/common.sh@41 -- # kill -0 1511018 00:04:42.093 00:38:34 -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.660 00:38:35 -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.660 00:38:35 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.660 00:38:35 -- json_config/common.sh@41 -- # kill -0 1511018 00:04:42.660 00:38:35 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:42.660 00:38:35 -- json_config/common.sh@43 -- # break 00:04:42.660 00:38:35 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:42.660 00:38:35 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:42.660 SPDK target shutdown done 00:04:42.660 00:38:35 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:42.660 Success 00:04:42.660 00:04:42.660 real 0m1.432s 00:04:42.660 user 0m1.251s 00:04:42.660 sys 0m0.352s 00:04:42.660 00:38:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:42.660 00:38:35 -- common/autotest_common.sh@10 -- # set +x 00:04:42.660 ************************************ 00:04:42.660 END TEST json_config_extra_key 00:04:42.660 ************************************ 00:04:42.660 00:38:35 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:42.660 00:38:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:42.660 00:38:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:42.660 00:38:35 -- common/autotest_common.sh@10 -- # set +x 00:04:42.660 ************************************ 00:04:42.660 START TEST alias_rpc 00:04:42.660 ************************************ 00:04:42.660 00:38:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:42.660 * Looking for test storage... 00:04:42.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:42.660 00:38:35 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:42.660 00:38:35 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1511314 00:04:42.660 00:38:35 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:42.660 00:38:35 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1511314 00:04:42.660 00:38:35 -- common/autotest_common.sh@817 -- # '[' -z 1511314 ']' 00:04:42.660 00:38:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.660 00:38:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:42.660 00:38:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:42.660 00:38:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:42.660 00:38:35 -- common/autotest_common.sh@10 -- # set +x 00:04:42.919 [2024-04-27 00:38:35.371398] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:04:42.919 [2024-04-27 00:38:35.371436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511314 ] 00:04:42.919 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.919 [2024-04-27 00:38:35.424600] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.919 [2024-04-27 00:38:35.495822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.485 00:38:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:43.485 00:38:36 -- common/autotest_common.sh@850 -- # return 0 00:04:43.485 00:38:36 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:43.744 00:38:36 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1511314 00:04:43.744 00:38:36 -- common/autotest_common.sh@936 -- # '[' -z 1511314 ']' 00:04:43.744 00:38:36 -- common/autotest_common.sh@940 -- # kill -0 1511314 00:04:43.744 00:38:36 -- common/autotest_common.sh@941 -- # uname 00:04:43.744 00:38:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:43.744 00:38:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1511314 00:04:43.744 00:38:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:43.744 00:38:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:43.744 00:38:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1511314' 00:04:43.744 killing process with pid 1511314 00:04:43.744 00:38:36 -- common/autotest_common.sh@955 -- # kill 1511314 00:04:43.744 00:38:36 -- common/autotest_common.sh@960 -- # wait 1511314 00:04:44.311 00:04:44.311 real 0m1.486s 00:04:44.311 user 0m1.623s 00:04:44.311 sys 0m0.380s 00:04:44.311 00:38:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:44.311 00:38:36 -- common/autotest_common.sh@10 -- # set +x 00:04:44.311 ************************************ 00:04:44.311 END TEST alias_rpc 00:04:44.311 ************************************ 00:04:44.311 00:38:36 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:44.311 00:38:36 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:44.311 00:38:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:44.311 00:38:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:44.311 00:38:36 -- common/autotest_common.sh@10 -- # set +x 00:04:44.311 ************************************ 00:04:44.311 START TEST spdkcli_tcp 00:04:44.311 ************************************ 00:04:44.311 00:38:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:44.311 * Looking for test storage... 
00:04:44.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:44.311 00:38:36 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:44.311 00:38:36 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:44.311 00:38:36 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:44.311 00:38:36 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:44.311 00:38:36 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:44.311 00:38:36 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:44.311 00:38:36 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:44.311 00:38:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:44.311 00:38:36 -- common/autotest_common.sh@10 -- # set +x 00:04:44.311 00:38:36 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1511607 00:04:44.311 00:38:36 -- spdkcli/tcp.sh@27 -- # waitforlisten 1511607 00:04:44.311 00:38:36 -- common/autotest_common.sh@817 -- # '[' -z 1511607 ']' 00:04:44.311 00:38:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.311 00:38:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:44.311 00:38:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.311 00:38:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:44.311 00:38:36 -- common/autotest_common.sh@10 -- # set +x 00:04:44.311 00:38:36 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:44.570 [2024-04-27 00:38:37.036508] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
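spdkcli_tcp, whose startup is traced above, reaches the target's UNIX-domain RPC socket over TCP: a socat process bridges 127.0.0.1:9998 to /var/tmp/spdk.sock, and rpc_get_methods is issued through that bridge, producing the long method list that follows. The bridge and client call as used below (the explicit cleanup kill is illustrative; the test's err_cleanup handles it):

  # expose /var/tmp/spdk.sock on 127.0.0.1:9998
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # query the target through the TCP side (retry/timeout values as in the trace)
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"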
00:04:44.570 [2024-04-27 00:38:37.036551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511607 ] 00:04:44.570 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.570 [2024-04-27 00:38:37.091948] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.570 [2024-04-27 00:38:37.165355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.570 [2024-04-27 00:38:37.165356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.506 00:38:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:45.506 00:38:37 -- common/autotest_common.sh@850 -- # return 0 00:04:45.506 00:38:37 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:45.506 00:38:37 -- spdkcli/tcp.sh@31 -- # socat_pid=1511838 00:04:45.506 00:38:37 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:45.506 [ 00:04:45.506 "bdev_malloc_delete", 00:04:45.506 "bdev_malloc_create", 00:04:45.506 "bdev_null_resize", 00:04:45.506 "bdev_null_delete", 00:04:45.506 "bdev_null_create", 00:04:45.506 "bdev_nvme_cuse_unregister", 00:04:45.506 "bdev_nvme_cuse_register", 00:04:45.506 "bdev_opal_new_user", 00:04:45.506 "bdev_opal_set_lock_state", 00:04:45.506 "bdev_opal_delete", 00:04:45.506 "bdev_opal_get_info", 00:04:45.506 "bdev_opal_create", 00:04:45.506 "bdev_nvme_opal_revert", 00:04:45.506 "bdev_nvme_opal_init", 00:04:45.506 "bdev_nvme_send_cmd", 00:04:45.506 "bdev_nvme_get_path_iostat", 00:04:45.506 "bdev_nvme_get_mdns_discovery_info", 00:04:45.506 "bdev_nvme_stop_mdns_discovery", 00:04:45.506 "bdev_nvme_start_mdns_discovery", 00:04:45.506 "bdev_nvme_set_multipath_policy", 00:04:45.506 "bdev_nvme_set_preferred_path", 00:04:45.506 "bdev_nvme_get_io_paths", 00:04:45.506 "bdev_nvme_remove_error_injection", 00:04:45.506 "bdev_nvme_add_error_injection", 00:04:45.506 "bdev_nvme_get_discovery_info", 00:04:45.506 "bdev_nvme_stop_discovery", 00:04:45.506 "bdev_nvme_start_discovery", 00:04:45.506 "bdev_nvme_get_controller_health_info", 00:04:45.506 "bdev_nvme_disable_controller", 00:04:45.506 "bdev_nvme_enable_controller", 00:04:45.506 "bdev_nvme_reset_controller", 00:04:45.506 "bdev_nvme_get_transport_statistics", 00:04:45.506 "bdev_nvme_apply_firmware", 00:04:45.506 "bdev_nvme_detach_controller", 00:04:45.506 "bdev_nvme_get_controllers", 00:04:45.506 "bdev_nvme_attach_controller", 00:04:45.506 "bdev_nvme_set_hotplug", 00:04:45.506 "bdev_nvme_set_options", 00:04:45.506 "bdev_passthru_delete", 00:04:45.506 "bdev_passthru_create", 00:04:45.506 "bdev_lvol_grow_lvstore", 00:04:45.506 "bdev_lvol_get_lvols", 00:04:45.506 "bdev_lvol_get_lvstores", 00:04:45.506 "bdev_lvol_delete", 00:04:45.506 "bdev_lvol_set_read_only", 00:04:45.506 "bdev_lvol_resize", 00:04:45.506 "bdev_lvol_decouple_parent", 00:04:45.506 "bdev_lvol_inflate", 00:04:45.506 "bdev_lvol_rename", 00:04:45.506 "bdev_lvol_clone_bdev", 00:04:45.506 "bdev_lvol_clone", 00:04:45.506 "bdev_lvol_snapshot", 00:04:45.506 "bdev_lvol_create", 00:04:45.506 "bdev_lvol_delete_lvstore", 00:04:45.506 "bdev_lvol_rename_lvstore", 00:04:45.506 "bdev_lvol_create_lvstore", 00:04:45.506 "bdev_raid_set_options", 00:04:45.506 "bdev_raid_remove_base_bdev", 00:04:45.506 "bdev_raid_add_base_bdev", 00:04:45.506 "bdev_raid_delete", 00:04:45.506 "bdev_raid_create", 
00:04:45.506 "bdev_raid_get_bdevs", 00:04:45.506 "bdev_error_inject_error", 00:04:45.506 "bdev_error_delete", 00:04:45.506 "bdev_error_create", 00:04:45.506 "bdev_split_delete", 00:04:45.506 "bdev_split_create", 00:04:45.506 "bdev_delay_delete", 00:04:45.506 "bdev_delay_create", 00:04:45.506 "bdev_delay_update_latency", 00:04:45.506 "bdev_zone_block_delete", 00:04:45.506 "bdev_zone_block_create", 00:04:45.506 "blobfs_create", 00:04:45.506 "blobfs_detect", 00:04:45.506 "blobfs_set_cache_size", 00:04:45.506 "bdev_aio_delete", 00:04:45.506 "bdev_aio_rescan", 00:04:45.506 "bdev_aio_create", 00:04:45.506 "bdev_ftl_set_property", 00:04:45.506 "bdev_ftl_get_properties", 00:04:45.506 "bdev_ftl_get_stats", 00:04:45.506 "bdev_ftl_unmap", 00:04:45.506 "bdev_ftl_unload", 00:04:45.506 "bdev_ftl_delete", 00:04:45.506 "bdev_ftl_load", 00:04:45.506 "bdev_ftl_create", 00:04:45.506 "bdev_virtio_attach_controller", 00:04:45.506 "bdev_virtio_scsi_get_devices", 00:04:45.506 "bdev_virtio_detach_controller", 00:04:45.506 "bdev_virtio_blk_set_hotplug", 00:04:45.506 "bdev_iscsi_delete", 00:04:45.506 "bdev_iscsi_create", 00:04:45.506 "bdev_iscsi_set_options", 00:04:45.506 "accel_error_inject_error", 00:04:45.506 "ioat_scan_accel_module", 00:04:45.506 "dsa_scan_accel_module", 00:04:45.506 "iaa_scan_accel_module", 00:04:45.506 "vfu_virtio_create_scsi_endpoint", 00:04:45.506 "vfu_virtio_scsi_remove_target", 00:04:45.506 "vfu_virtio_scsi_add_target", 00:04:45.506 "vfu_virtio_create_blk_endpoint", 00:04:45.506 "vfu_virtio_delete_endpoint", 00:04:45.506 "keyring_file_remove_key", 00:04:45.506 "keyring_file_add_key", 00:04:45.506 "iscsi_get_histogram", 00:04:45.506 "iscsi_enable_histogram", 00:04:45.506 "iscsi_set_options", 00:04:45.506 "iscsi_get_auth_groups", 00:04:45.506 "iscsi_auth_group_remove_secret", 00:04:45.506 "iscsi_auth_group_add_secret", 00:04:45.506 "iscsi_delete_auth_group", 00:04:45.506 "iscsi_create_auth_group", 00:04:45.506 "iscsi_set_discovery_auth", 00:04:45.506 "iscsi_get_options", 00:04:45.507 "iscsi_target_node_request_logout", 00:04:45.507 "iscsi_target_node_set_redirect", 00:04:45.507 "iscsi_target_node_set_auth", 00:04:45.507 "iscsi_target_node_add_lun", 00:04:45.507 "iscsi_get_stats", 00:04:45.507 "iscsi_get_connections", 00:04:45.507 "iscsi_portal_group_set_auth", 00:04:45.507 "iscsi_start_portal_group", 00:04:45.507 "iscsi_delete_portal_group", 00:04:45.507 "iscsi_create_portal_group", 00:04:45.507 "iscsi_get_portal_groups", 00:04:45.507 "iscsi_delete_target_node", 00:04:45.507 "iscsi_target_node_remove_pg_ig_maps", 00:04:45.507 "iscsi_target_node_add_pg_ig_maps", 00:04:45.507 "iscsi_create_target_node", 00:04:45.507 "iscsi_get_target_nodes", 00:04:45.507 "iscsi_delete_initiator_group", 00:04:45.507 "iscsi_initiator_group_remove_initiators", 00:04:45.507 "iscsi_initiator_group_add_initiators", 00:04:45.507 "iscsi_create_initiator_group", 00:04:45.507 "iscsi_get_initiator_groups", 00:04:45.507 "nvmf_set_crdt", 00:04:45.507 "nvmf_set_config", 00:04:45.507 "nvmf_set_max_subsystems", 00:04:45.507 "nvmf_subsystem_get_listeners", 00:04:45.507 "nvmf_subsystem_get_qpairs", 00:04:45.507 "nvmf_subsystem_get_controllers", 00:04:45.507 "nvmf_get_stats", 00:04:45.507 "nvmf_get_transports", 00:04:45.507 "nvmf_create_transport", 00:04:45.507 "nvmf_get_targets", 00:04:45.507 "nvmf_delete_target", 00:04:45.507 "nvmf_create_target", 00:04:45.507 "nvmf_subsystem_allow_any_host", 00:04:45.507 "nvmf_subsystem_remove_host", 00:04:45.507 "nvmf_subsystem_add_host", 00:04:45.507 "nvmf_ns_remove_host", 00:04:45.507 
"nvmf_ns_add_host", 00:04:45.507 "nvmf_subsystem_remove_ns", 00:04:45.507 "nvmf_subsystem_add_ns", 00:04:45.507 "nvmf_subsystem_listener_set_ana_state", 00:04:45.507 "nvmf_discovery_get_referrals", 00:04:45.507 "nvmf_discovery_remove_referral", 00:04:45.507 "nvmf_discovery_add_referral", 00:04:45.507 "nvmf_subsystem_remove_listener", 00:04:45.507 "nvmf_subsystem_add_listener", 00:04:45.507 "nvmf_delete_subsystem", 00:04:45.507 "nvmf_create_subsystem", 00:04:45.507 "nvmf_get_subsystems", 00:04:45.507 "env_dpdk_get_mem_stats", 00:04:45.507 "nbd_get_disks", 00:04:45.507 "nbd_stop_disk", 00:04:45.507 "nbd_start_disk", 00:04:45.507 "ublk_recover_disk", 00:04:45.507 "ublk_get_disks", 00:04:45.507 "ublk_stop_disk", 00:04:45.507 "ublk_start_disk", 00:04:45.507 "ublk_destroy_target", 00:04:45.507 "ublk_create_target", 00:04:45.507 "virtio_blk_create_transport", 00:04:45.507 "virtio_blk_get_transports", 00:04:45.507 "vhost_controller_set_coalescing", 00:04:45.507 "vhost_get_controllers", 00:04:45.507 "vhost_delete_controller", 00:04:45.507 "vhost_create_blk_controller", 00:04:45.507 "vhost_scsi_controller_remove_target", 00:04:45.507 "vhost_scsi_controller_add_target", 00:04:45.507 "vhost_start_scsi_controller", 00:04:45.507 "vhost_create_scsi_controller", 00:04:45.507 "thread_set_cpumask", 00:04:45.507 "framework_get_scheduler", 00:04:45.507 "framework_set_scheduler", 00:04:45.507 "framework_get_reactors", 00:04:45.507 "thread_get_io_channels", 00:04:45.507 "thread_get_pollers", 00:04:45.507 "thread_get_stats", 00:04:45.507 "framework_monitor_context_switch", 00:04:45.507 "spdk_kill_instance", 00:04:45.507 "log_enable_timestamps", 00:04:45.507 "log_get_flags", 00:04:45.507 "log_clear_flag", 00:04:45.507 "log_set_flag", 00:04:45.507 "log_get_level", 00:04:45.507 "log_set_level", 00:04:45.507 "log_get_print_level", 00:04:45.507 "log_set_print_level", 00:04:45.507 "framework_enable_cpumask_locks", 00:04:45.507 "framework_disable_cpumask_locks", 00:04:45.507 "framework_wait_init", 00:04:45.507 "framework_start_init", 00:04:45.507 "scsi_get_devices", 00:04:45.507 "bdev_get_histogram", 00:04:45.507 "bdev_enable_histogram", 00:04:45.507 "bdev_set_qos_limit", 00:04:45.507 "bdev_set_qd_sampling_period", 00:04:45.507 "bdev_get_bdevs", 00:04:45.507 "bdev_reset_iostat", 00:04:45.507 "bdev_get_iostat", 00:04:45.507 "bdev_examine", 00:04:45.507 "bdev_wait_for_examine", 00:04:45.507 "bdev_set_options", 00:04:45.507 "notify_get_notifications", 00:04:45.507 "notify_get_types", 00:04:45.507 "accel_get_stats", 00:04:45.507 "accel_set_options", 00:04:45.507 "accel_set_driver", 00:04:45.507 "accel_crypto_key_destroy", 00:04:45.507 "accel_crypto_keys_get", 00:04:45.507 "accel_crypto_key_create", 00:04:45.507 "accel_assign_opc", 00:04:45.507 "accel_get_module_info", 00:04:45.507 "accel_get_opc_assignments", 00:04:45.507 "vmd_rescan", 00:04:45.507 "vmd_remove_device", 00:04:45.507 "vmd_enable", 00:04:45.507 "sock_get_default_impl", 00:04:45.507 "sock_set_default_impl", 00:04:45.507 "sock_impl_set_options", 00:04:45.507 "sock_impl_get_options", 00:04:45.507 "iobuf_get_stats", 00:04:45.507 "iobuf_set_options", 00:04:45.507 "keyring_get_keys", 00:04:45.507 "framework_get_pci_devices", 00:04:45.507 "framework_get_config", 00:04:45.507 "framework_get_subsystems", 00:04:45.507 "vfu_tgt_set_base_path", 00:04:45.507 "trace_get_info", 00:04:45.507 "trace_get_tpoint_group_mask", 00:04:45.507 "trace_disable_tpoint_group", 00:04:45.507 "trace_enable_tpoint_group", 00:04:45.507 "trace_clear_tpoint_mask", 00:04:45.507 
"trace_set_tpoint_mask", 00:04:45.507 "spdk_get_version", 00:04:45.507 "rpc_get_methods" 00:04:45.507 ] 00:04:45.507 00:38:38 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:45.507 00:38:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:45.507 00:38:38 -- common/autotest_common.sh@10 -- # set +x 00:04:45.507 00:38:38 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:45.507 00:38:38 -- spdkcli/tcp.sh@38 -- # killprocess 1511607 00:04:45.507 00:38:38 -- common/autotest_common.sh@936 -- # '[' -z 1511607 ']' 00:04:45.507 00:38:38 -- common/autotest_common.sh@940 -- # kill -0 1511607 00:04:45.507 00:38:38 -- common/autotest_common.sh@941 -- # uname 00:04:45.507 00:38:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:45.507 00:38:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1511607 00:04:45.507 00:38:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:45.507 00:38:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:45.507 00:38:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1511607' 00:04:45.507 killing process with pid 1511607 00:04:45.507 00:38:38 -- common/autotest_common.sh@955 -- # kill 1511607 00:04:45.507 00:38:38 -- common/autotest_common.sh@960 -- # wait 1511607 00:04:45.765 00:04:45.765 real 0m1.510s 00:04:45.765 user 0m2.797s 00:04:45.765 sys 0m0.427s 00:04:45.765 00:38:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:45.765 00:38:38 -- common/autotest_common.sh@10 -- # set +x 00:04:45.765 ************************************ 00:04:45.765 END TEST spdkcli_tcp 00:04:45.765 ************************************ 00:04:45.765 00:38:38 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.765 00:38:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.765 00:38:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.765 00:38:38 -- common/autotest_common.sh@10 -- # set +x 00:04:46.024 ************************************ 00:04:46.024 START TEST dpdk_mem_utility 00:04:46.024 ************************************ 00:04:46.024 00:38:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.024 * Looking for test storage... 00:04:46.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:46.024 00:38:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:46.024 00:38:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1512125 00:04:46.024 00:38:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1512125 00:04:46.024 00:38:38 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:46.024 00:38:38 -- common/autotest_common.sh@817 -- # '[' -z 1512125 ']' 00:04:46.024 00:38:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.024 00:38:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:46.024 00:38:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:46.024 00:38:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:46.024 00:38:38 -- common/autotest_common.sh@10 -- # set +x 00:04:46.283 [2024-04-27 00:38:38.736801] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:04:46.283 [2024-04-27 00:38:38.736846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512125 ] 00:04:46.283 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.283 [2024-04-27 00:38:38.789431] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.283 [2024-04-27 00:38:38.859499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.848 00:38:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:46.848 00:38:39 -- common/autotest_common.sh@850 -- # return 0 00:04:46.848 00:38:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:46.848 00:38:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:46.848 00:38:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:46.848 00:38:39 -- common/autotest_common.sh@10 -- # set +x 00:04:47.107 { 00:04:47.107 "filename": "/tmp/spdk_mem_dump.txt" 00:04:47.107 } 00:04:47.107 00:38:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:47.107 00:38:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:47.107 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:47.107 1 heaps totaling size 814.000000 MiB 00:04:47.107 size: 814.000000 MiB heap id: 0 00:04:47.107 end heaps---------- 00:04:47.107 8 mempools totaling size 598.116089 MiB 00:04:47.107 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:47.107 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:47.107 size: 84.521057 MiB name: bdev_io_1512125 00:04:47.107 size: 51.011292 MiB name: evtpool_1512125 00:04:47.107 size: 50.003479 MiB name: msgpool_1512125 00:04:47.107 size: 21.763794 MiB name: PDU_Pool 00:04:47.107 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:47.107 size: 0.026123 MiB name: Session_Pool 00:04:47.107 end mempools------- 00:04:47.107 6 memzones totaling size 4.142822 MiB 00:04:47.107 size: 1.000366 MiB name: RG_ring_0_1512125 00:04:47.107 size: 1.000366 MiB name: RG_ring_1_1512125 00:04:47.107 size: 1.000366 MiB name: RG_ring_4_1512125 00:04:47.107 size: 1.000366 MiB name: RG_ring_5_1512125 00:04:47.107 size: 0.125366 MiB name: RG_ring_2_1512125 00:04:47.107 size: 0.015991 MiB name: RG_ring_3_1512125 00:04:47.107 end memzones------- 00:04:47.107 00:38:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:47.107 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:47.107 list of free elements. 
size: 12.519348 MiB 00:04:47.107 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:47.107 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:47.107 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:47.107 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:47.107 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:47.107 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:47.107 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:47.107 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:47.107 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:47.107 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:47.107 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:47.107 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:47.107 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:47.107 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:47.107 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:47.107 list of standard malloc elements. size: 199.218079 MiB 00:04:47.107 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:47.107 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:47.107 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:47.107 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:47.107 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:47.107 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:47.107 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:47.107 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:47.107 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:47.107 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:47.107 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:47.107 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:47.107 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:47.107 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:47.107 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:47.107 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:47.107 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:47.107 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:47.107 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:47.107 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:47.107 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:47.107 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:47.107 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:47.107 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:47.107 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:47.107 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:47.107 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:47.107 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:47.107 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:47.107 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:47.107 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:47.107 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:47.107 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:47.107 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:47.107 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:47.107 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:47.107 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:47.107 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:47.107 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:47.107 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:47.107 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:47.107 list of memzone associated elements. size: 602.262573 MiB 00:04:47.107 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:47.107 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:47.107 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:47.107 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:47.107 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:47.107 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1512125_0 00:04:47.107 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:47.107 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1512125_0 00:04:47.107 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:47.107 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1512125_0 00:04:47.107 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:47.107 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:47.107 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:47.107 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:47.107 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:47.107 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1512125 00:04:47.107 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:47.107 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1512125 00:04:47.107 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:47.107 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1512125 00:04:47.107 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:47.107 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:47.107 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:47.107 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:47.107 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:47.107 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:47.107 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:47.107 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:47.107 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:47.107 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1512125 00:04:47.107 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:47.107 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1512125 00:04:47.107 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:47.107 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1512125 00:04:47.107 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:47.107 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1512125 00:04:47.107 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:47.107 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1512125 00:04:47.107 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:47.107 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:47.107 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:47.108 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:47.108 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:47.108 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:47.108 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:47.108 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1512125 00:04:47.108 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:47.108 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:47.108 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:47.108 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:47.108 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:47.108 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1512125 00:04:47.108 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:47.108 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:47.108 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:47.108 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1512125 00:04:47.108 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:47.108 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1512125 00:04:47.108 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:47.108 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:47.108 00:38:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:47.108 00:38:39 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1512125 00:04:47.108 00:38:39 -- common/autotest_common.sh@936 -- # '[' -z 1512125 ']' 00:04:47.108 00:38:39 -- common/autotest_common.sh@940 -- # kill -0 1512125 00:04:47.108 00:38:39 -- common/autotest_common.sh@941 -- # uname 00:04:47.108 00:38:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:47.108 00:38:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1512125 00:04:47.108 00:38:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:47.108 00:38:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:47.108 00:38:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1512125' 00:04:47.108 killing process with pid 1512125 00:04:47.108 00:38:39 -- common/autotest_common.sh@955 -- # kill 1512125 00:04:47.108 00:38:39 -- common/autotest_common.sh@960 -- # wait 1512125 00:04:47.365 00:04:47.365 real 0m1.422s 00:04:47.365 user 0m1.490s 00:04:47.365 sys 0m0.392s 00:04:47.365 00:38:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:47.365 00:38:40 -- common/autotest_common.sh@10 -- # set +x 00:04:47.365 ************************************ 00:04:47.365 END TEST dpdk_mem_utility 00:04:47.365 ************************************ 00:04:47.365 00:38:40 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:47.365 00:38:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.365 00:38:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.365 00:38:40 -- common/autotest_common.sh@10 -- # set +x 
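The dpdk_mem_utility test that just finished drives SPDK's DPDK memory introspection in two steps: env_dpdk_get_mem_stats makes the target dump its memory state to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py summarizes that dump as heaps, mempools and memzones; the second invocation with -m 0 expands heap 0 into the element lists shown above. Reduced to the commands involved (rpc_cmd in the trace is the test wrapper around scripts/rpc.py):

  # ask the running target to dump DPDK memory state
  scripts/rpc.py env_dpdk_get_mem_stats      # -> { "filename": "/tmp/spdk_mem_dump.txt" }
  # summarize the dump; -m 0 prints the per-element breakdown of heap 0
  scripts/dpdk_mem_info.py
  scripts/dpdk_mem_info.py -m 0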
00:04:47.622 ************************************ 00:04:47.622 START TEST event 00:04:47.622 ************************************ 00:04:47.622 00:38:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:47.622 * Looking for test storage... 00:04:47.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:47.622 00:38:40 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:47.622 00:38:40 -- bdev/nbd_common.sh@6 -- # set -e 00:04:47.622 00:38:40 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:47.622 00:38:40 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:04:47.623 00:38:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.623 00:38:40 -- common/autotest_common.sh@10 -- # set +x 00:04:47.880 ************************************ 00:04:47.880 START TEST event_perf 00:04:47.880 ************************************ 00:04:47.880 00:38:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:47.880 Running I/O for 1 seconds...[2024-04-27 00:38:40.427614] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:04:47.880 [2024-04-27 00:38:40.427685] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512441 ] 00:04:47.880 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.880 [2024-04-27 00:38:40.486435] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:47.880 [2024-04-27 00:38:40.559200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.880 [2024-04-27 00:38:40.559299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:47.880 [2024-04-27 00:38:40.559389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:47.880 [2024-04-27 00:38:40.559390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.253 Running I/O for 1 seconds... 00:04:49.253 lcore 0: 204767 00:04:49.253 lcore 1: 204766 00:04:49.253 lcore 2: 204767 00:04:49.253 lcore 3: 204768 00:04:49.253 done. 
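event_perf above is launched with -m 0xF -t 1: mask 0xF places one reactor on each of cores 0-3 and the run lasts one second, after which each lcore reports how many events it processed (the four 'lcore N:' counters). Invocation as traced:

  test/event/event_perf/event_perf -m 0xF -t 1   # 0xF = cores 0..3, run for 1 second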
00:04:49.253 00:04:49.253 real 0m1.241s 00:04:49.253 user 0m4.166s 00:04:49.253 sys 0m0.072s 00:04:49.253 00:38:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.253 00:38:41 -- common/autotest_common.sh@10 -- # set +x 00:04:49.253 ************************************ 00:04:49.253 END TEST event_perf 00:04:49.253 ************************************ 00:04:49.253 00:38:41 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:49.253 00:38:41 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:49.253 00:38:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.253 00:38:41 -- common/autotest_common.sh@10 -- # set +x 00:04:49.253 ************************************ 00:04:49.253 START TEST event_reactor 00:04:49.253 ************************************ 00:04:49.253 00:38:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:49.253 [2024-04-27 00:38:41.841331] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:04:49.253 [2024-04-27 00:38:41.841410] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512698 ] 00:04:49.253 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.253 [2024-04-27 00:38:41.902176] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.511 [2024-04-27 00:38:41.978897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.445 test_start 00:04:50.445 oneshot 00:04:50.445 tick 100 00:04:50.445 tick 100 00:04:50.445 tick 250 00:04:50.445 tick 100 00:04:50.445 tick 100 00:04:50.445 tick 100 00:04:50.445 tick 250 00:04:50.445 tick 500 00:04:50.445 tick 100 00:04:50.445 tick 100 00:04:50.445 tick 250 00:04:50.445 tick 100 00:04:50.445 tick 100 00:04:50.445 test_end 00:04:50.445 00:04:50.445 real 0m1.248s 00:04:50.445 user 0m1.163s 00:04:50.445 sys 0m0.081s 00:04:50.445 00:38:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.445 00:38:43 -- common/autotest_common.sh@10 -- # set +x 00:04:50.445 ************************************ 00:04:50.445 END TEST event_reactor 00:04:50.445 ************************************ 00:04:50.445 00:38:43 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:50.445 00:38:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:04:50.445 00:38:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.445 00:38:43 -- common/autotest_common.sh@10 -- # set +x 00:04:50.703 ************************************ 00:04:50.703 START TEST event_reactor_perf 00:04:50.703 ************************************ 00:04:50.703 00:38:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:50.703 [2024-04-27 00:38:43.230983] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:04:50.703 [2024-04-27 00:38:43.231047] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512955 ] 00:04:50.703 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.703 [2024-04-27 00:38:43.288262] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.703 [2024-04-27 00:38:43.357847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.078 test_start 00:04:52.078 test_end 00:04:52.079 Performance: 498534 events per second 00:04:52.079 00:04:52.079 real 0m1.234s 00:04:52.079 user 0m1.154s 00:04:52.079 sys 0m0.076s 00:04:52.079 00:38:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:52.079 00:38:44 -- common/autotest_common.sh@10 -- # set +x 00:04:52.079 ************************************ 00:04:52.079 END TEST event_reactor_perf 00:04:52.079 ************************************ 00:04:52.079 00:38:44 -- event/event.sh@49 -- # uname -s 00:04:52.079 00:38:44 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:52.079 00:38:44 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:52.079 00:38:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.079 00:38:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.079 00:38:44 -- common/autotest_common.sh@10 -- # set +x 00:04:52.079 ************************************ 00:04:52.079 START TEST event_scheduler 00:04:52.079 ************************************ 00:04:52.079 00:38:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:52.079 * Looking for test storage... 00:04:52.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:52.079 00:38:44 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:52.079 00:38:44 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1513244 00:04:52.079 00:38:44 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.079 00:38:44 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:52.079 00:38:44 -- scheduler/scheduler.sh@37 -- # waitforlisten 1513244 00:04:52.079 00:38:44 -- common/autotest_common.sh@817 -- # '[' -z 1513244 ']' 00:04:52.079 00:38:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.079 00:38:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:52.079 00:38:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.079 00:38:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:52.079 00:38:44 -- common/autotest_common.sh@10 -- # set +x 00:04:52.079 [2024-04-27 00:38:44.726203] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:04:52.079 [2024-04-27 00:38:44.726263] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513244 ] 00:04:52.079 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.337 [2024-04-27 00:38:44.778623] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:52.337 [2024-04-27 00:38:44.858394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.337 [2024-04-27 00:38:44.858478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.337 [2024-04-27 00:38:44.858583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.337 [2024-04-27 00:38:44.858585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:52.904 00:38:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:52.904 00:38:45 -- common/autotest_common.sh@850 -- # return 0 00:04:52.904 00:38:45 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:52.904 00:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.904 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:52.904 POWER: Env isn't set yet! 00:04:52.904 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:52.904 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:52.904 POWER: Cannot set governor of lcore 0 to userspace 00:04:52.904 POWER: Attempting to initialise PSTAT power management... 00:04:52.904 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:52.904 POWER: Initialized successfully for lcore 0 power management 00:04:52.904 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:52.904 POWER: Initialized successfully for lcore 1 power management 00:04:52.904 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:52.904 POWER: Initialized successfully for lcore 2 power management 00:04:52.904 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:52.904 POWER: Initialized successfully for lcore 3 power management 00:04:52.904 00:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:52.904 00:38:45 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:52.904 00:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:52.904 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.163 [2024-04-27 00:38:45.649947] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
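The scheduler test app is started with --wait-for-rpc, so framework initialization is held back until RPCs arrive: selecting the dynamic scheduler brings up per-lcore power management (the POWER/governor lines above), and only then is framework_start_init issued. The two RPCs from the trace, with rpc_cmd being the test wrapper around scripts/rpc.py:

  rpc_cmd framework_set_scheduler dynamic
  rpc_cmd framework_start_init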
00:04:53.163 00:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.163 00:38:45 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:53.163 00:38:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.163 00:38:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.163 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.163 ************************************ 00:04:53.163 START TEST scheduler_create_thread 00:04:53.163 ************************************ 00:04:53.163 00:38:45 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:04:53.163 00:38:45 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:53.163 00:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.163 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.163 2 00:04:53.163 00:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.163 00:38:45 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:53.163 00:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.163 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.163 3 00:04:53.163 00:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.163 00:38:45 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:53.163 00:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.163 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.163 4 00:04:53.163 00:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.163 00:38:45 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:53.163 00:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.163 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.163 5 00:04:53.163 00:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.163 00:38:45 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:53.163 00:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.163 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.163 6 00:04:53.163 00:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.163 00:38:45 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:53.163 00:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.163 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.422 7 00:04:53.422 00:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.422 00:38:45 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:53.422 00:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.422 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.422 8 00:04:53.422 00:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.422 00:38:45 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:53.422 00:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.422 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.422 9 00:04:53.422 
00:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.422 00:38:45 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:53.422 00:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.422 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.422 10 00:04:53.422 00:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.422 00:38:45 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:53.422 00:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.422 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:53.422 00:38:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:53.422 00:38:45 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:53.422 00:38:45 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:53.422 00:38:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:53.422 00:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.356 00:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:54.356 00:38:46 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:54.356 00:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:54.356 00:38:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.731 00:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:55.731 00:38:48 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:55.731 00:38:48 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:55.731 00:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:55.731 00:38:48 -- common/autotest_common.sh@10 -- # set +x 00:04:56.665 00:38:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:56.665 00:04:56.665 real 0m3.383s 00:04:56.665 user 0m0.023s 00:04:56.665 sys 0m0.005s 00:04:56.665 00:38:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:56.665 00:38:49 -- common/autotest_common.sh@10 -- # set +x 00:04:56.665 ************************************ 00:04:56.665 END TEST scheduler_create_thread 00:04:56.665 ************************************ 00:04:56.665 00:38:49 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:56.665 00:38:49 -- scheduler/scheduler.sh@46 -- # killprocess 1513244 00:04:56.665 00:38:49 -- common/autotest_common.sh@936 -- # '[' -z 1513244 ']' 00:04:56.665 00:38:49 -- common/autotest_common.sh@940 -- # kill -0 1513244 00:04:56.665 00:38:49 -- common/autotest_common.sh@941 -- # uname 00:04:56.665 00:38:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:56.665 00:38:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1513244 00:04:56.665 00:38:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:04:56.665 00:38:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:04:56.665 00:38:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1513244' 00:04:56.665 killing process with pid 1513244 00:04:56.665 00:38:49 -- common/autotest_common.sh@955 -- # kill 1513244 00:04:56.665 00:38:49 -- common/autotest_common.sh@960 -- # wait 1513244 00:04:56.923 [2024-04-27 00:38:49.550305] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
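scheduler_create_thread, traced above, exercises the scheduler purely through the scheduler_plugin RPCs: pinned threads are created with a cpumask and an active percentage, one thread's activity is changed at runtime, and another is created and then deleted before the app shuts down. Representative calls from the trace (thread ids 11 and 12 are the ids returned in this particular run):

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12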
00:04:57.183 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:04:57.183 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:57.183 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:04:57.183 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:57.183 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:04:57.183 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:57.183 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:04:57.183 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:57.183 00:04:57.183 real 0m5.214s 00:04:57.183 user 0m10.754s 00:04:57.183 sys 0m0.441s 00:04:57.183 00:38:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:57.183 00:38:49 -- common/autotest_common.sh@10 -- # set +x 00:04:57.183 ************************************ 00:04:57.183 END TEST event_scheduler 00:04:57.183 ************************************ 00:04:57.183 00:38:49 -- event/event.sh@51 -- # modprobe -n nbd 00:04:57.183 00:38:49 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:57.183 00:38:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.183 00:38:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.183 00:38:49 -- common/autotest_common.sh@10 -- # set +x 00:04:57.442 ************************************ 00:04:57.442 START TEST app_repeat 00:04:57.442 ************************************ 00:04:57.442 00:38:49 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:04:57.442 00:38:49 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.442 00:38:49 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.442 00:38:49 -- event/event.sh@13 -- # local nbd_list 00:04:57.442 00:38:49 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.442 00:38:49 -- event/event.sh@14 -- # local bdev_list 00:04:57.442 00:38:49 -- event/event.sh@15 -- # local repeat_times=4 00:04:57.442 00:38:49 -- event/event.sh@17 -- # modprobe nbd 00:04:57.442 00:38:49 -- event/event.sh@19 -- # repeat_pid=1514225 00:04:57.442 00:38:49 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.442 00:38:49 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:57.442 00:38:49 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1514225' 00:04:57.442 Process app_repeat pid: 1514225 00:04:57.442 00:38:49 -- event/event.sh@23 -- # for i in {0..2} 00:04:57.442 00:38:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:57.442 spdk_app_start Round 0 00:04:57.442 00:38:49 -- event/event.sh@25 -- # waitforlisten 1514225 /var/tmp/spdk-nbd.sock 00:04:57.442 00:38:49 -- common/autotest_common.sh@817 -- # '[' -z 1514225 ']' 00:04:57.442 00:38:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.442 00:38:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:57.442 00:38:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:57.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.442 00:38:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:57.442 00:38:49 -- common/autotest_common.sh@10 -- # set +x 00:04:57.442 [2024-04-27 00:38:50.023082] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:04:57.442 [2024-04-27 00:38:50.023143] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514225 ] 00:04:57.442 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.442 [2024-04-27 00:38:50.083850] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.700 [2024-04-27 00:38:50.162322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.700 [2024-04-27 00:38:50.162327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.266 00:38:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:58.266 00:38:50 -- common/autotest_common.sh@850 -- # return 0 00:04:58.266 00:38:50 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.524 Malloc0 00:04:58.524 00:38:51 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.524 Malloc1 00:04:58.783 00:38:51 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@12 -- # local i 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.783 /dev/nbd0 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.783 00:38:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:04:58.783 00:38:51 -- common/autotest_common.sh@855 -- # local i 00:04:58.783 00:38:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:58.783 00:38:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:58.783 00:38:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:04:58.783 00:38:51 -- 
common/autotest_common.sh@859 -- # break 00:04:58.783 00:38:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:58.783 00:38:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:58.783 00:38:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.783 1+0 records in 00:04:58.783 1+0 records out 00:04:58.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180788 s, 22.7 MB/s 00:04:58.783 00:38:51 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.783 00:38:51 -- common/autotest_common.sh@872 -- # size=4096 00:04:58.783 00:38:51 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.783 00:38:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:58.783 00:38:51 -- common/autotest_common.sh@875 -- # return 0 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.783 00:38:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.041 /dev/nbd1 00:04:59.041 00:38:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.041 00:38:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.041 00:38:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:04:59.041 00:38:51 -- common/autotest_common.sh@855 -- # local i 00:04:59.041 00:38:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:04:59.041 00:38:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:04:59.041 00:38:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:04:59.041 00:38:51 -- common/autotest_common.sh@859 -- # break 00:04:59.041 00:38:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:04:59.041 00:38:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:04:59.041 00:38:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.041 1+0 records in 00:04:59.041 1+0 records out 00:04:59.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222321 s, 18.4 MB/s 00:04:59.041 00:38:51 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.041 00:38:51 -- common/autotest_common.sh@872 -- # size=4096 00:04:59.041 00:38:51 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.041 00:38:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:04:59.041 00:38:51 -- common/autotest_common.sh@875 -- # return 0 00:04:59.041 00:38:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.041 00:38:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.041 00:38:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.041 00:38:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.041 00:38:51 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.300 { 00:04:59.300 "nbd_device": "/dev/nbd0", 00:04:59.300 "bdev_name": "Malloc0" 00:04:59.300 }, 00:04:59.300 { 00:04:59.300 "nbd_device": "/dev/nbd1", 
00:04:59.300 "bdev_name": "Malloc1" 00:04:59.300 } 00:04:59.300 ]' 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.300 { 00:04:59.300 "nbd_device": "/dev/nbd0", 00:04:59.300 "bdev_name": "Malloc0" 00:04:59.300 }, 00:04:59.300 { 00:04:59.300 "nbd_device": "/dev/nbd1", 00:04:59.300 "bdev_name": "Malloc1" 00:04:59.300 } 00:04:59.300 ]' 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.300 /dev/nbd1' 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.300 /dev/nbd1' 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.300 256+0 records in 00:04:59.300 256+0 records out 00:04:59.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00991624 s, 106 MB/s 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.300 256+0 records in 00:04:59.300 256+0 records out 00:04:59.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133141 s, 78.8 MB/s 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.300 256+0 records in 00:04:59.300 256+0 records out 00:04:59.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145634 s, 72.0 MB/s 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@51 -- # local i 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.300 00:38:51 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.558 00:38:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.558 00:38:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.558 00:38:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.558 00:38:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.558 00:38:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.558 00:38:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.558 00:38:52 -- bdev/nbd_common.sh@41 -- # break 00:04:59.558 00:38:52 -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.558 00:38:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.558 00:38:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.816 00:38:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.816 00:38:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.816 00:38:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.816 00:38:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.816 00:38:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.816 00:38:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.816 00:38:52 -- bdev/nbd_common.sh@41 -- # break 00:04:59.816 00:38:52 -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.816 00:38:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.816 00:38:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.816 00:38:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.075 00:38:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.075 00:38:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.075 00:38:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.075 00:38:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.075 00:38:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.075 00:38:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.075 00:38:52 -- bdev/nbd_common.sh@65 -- # true 00:05:00.075 00:38:52 -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.075 00:38:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.075 00:38:52 -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.075 00:38:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.075 00:38:52 -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.075 00:38:52 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.075 00:38:52 -- event/event.sh@35 -- # 
sleep 3 00:05:00.333 [2024-04-27 00:38:52.956898] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.333 [2024-04-27 00:38:53.022124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.333 [2024-04-27 00:38:53.022126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.591 [2024-04-27 00:38:53.064081] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.591 [2024-04-27 00:38:53.064123] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.126 00:38:55 -- event/event.sh@23 -- # for i in {0..2} 00:05:03.126 00:38:55 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:03.126 spdk_app_start Round 1 00:05:03.126 00:38:55 -- event/event.sh@25 -- # waitforlisten 1514225 /var/tmp/spdk-nbd.sock 00:05:03.126 00:38:55 -- common/autotest_common.sh@817 -- # '[' -z 1514225 ']' 00:05:03.126 00:38:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.126 00:38:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:03.126 00:38:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.126 00:38:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:03.126 00:38:55 -- common/autotest_common.sh@10 -- # set +x 00:05:03.385 00:38:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:03.385 00:38:55 -- common/autotest_common.sh@850 -- # return 0 00:05:03.385 00:38:55 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.643 Malloc0 00:05:03.643 00:38:56 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.643 Malloc1 00:05:03.643 00:38:56 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@12 -- # local i 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.643 00:38:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.901 /dev/nbd0 00:05:03.901 00:38:56 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.901 00:38:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.901 00:38:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:03.901 00:38:56 -- common/autotest_common.sh@855 -- # local i 00:05:03.901 00:38:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:03.901 00:38:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:03.901 00:38:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:03.901 00:38:56 -- common/autotest_common.sh@859 -- # break 00:05:03.901 00:38:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:03.901 00:38:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:03.901 00:38:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.901 1+0 records in 00:05:03.901 1+0 records out 00:05:03.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193148 s, 21.2 MB/s 00:05:03.901 00:38:56 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.901 00:38:56 -- common/autotest_common.sh@872 -- # size=4096 00:05:03.901 00:38:56 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.901 00:38:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:03.901 00:38:56 -- common/autotest_common.sh@875 -- # return 0 00:05:03.901 00:38:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.901 00:38:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.901 00:38:56 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.158 /dev/nbd1 00:05:04.158 00:38:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.158 00:38:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.158 00:38:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:04.158 00:38:56 -- common/autotest_common.sh@855 -- # local i 00:05:04.158 00:38:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:04.158 00:38:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:04.158 00:38:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:04.158 00:38:56 -- common/autotest_common.sh@859 -- # break 00:05:04.158 00:38:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:04.158 00:38:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:04.158 00:38:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.158 1+0 records in 00:05:04.158 1+0 records out 00:05:04.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173932 s, 23.5 MB/s 00:05:04.158 00:38:56 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.158 00:38:56 -- common/autotest_common.sh@872 -- # size=4096 00:05:04.158 00:38:56 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.158 00:38:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:04.158 00:38:56 -- common/autotest_common.sh@875 -- # return 0 00:05:04.158 00:38:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.158 00:38:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.158 00:38:56 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.158 00:38:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.158 00:38:56 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.416 00:38:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.416 { 00:05:04.416 "nbd_device": "/dev/nbd0", 00:05:04.416 "bdev_name": "Malloc0" 00:05:04.416 }, 00:05:04.416 { 00:05:04.416 "nbd_device": "/dev/nbd1", 00:05:04.416 "bdev_name": "Malloc1" 00:05:04.416 } 00:05:04.416 ]' 00:05:04.416 00:38:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.416 { 00:05:04.416 "nbd_device": "/dev/nbd0", 00:05:04.416 "bdev_name": "Malloc0" 00:05:04.416 }, 00:05:04.416 { 00:05:04.416 "nbd_device": "/dev/nbd1", 00:05:04.416 "bdev_name": "Malloc1" 00:05:04.416 } 00:05:04.417 ]' 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.417 /dev/nbd1' 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.417 /dev/nbd1' 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.417 256+0 records in 00:05:04.417 256+0 records out 00:05:04.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103005 s, 102 MB/s 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.417 256+0 records in 00:05:04.417 256+0 records out 00:05:04.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138701 s, 75.6 MB/s 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.417 256+0 records in 00:05:04.417 256+0 records out 00:05:04.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146643 s, 71.5 MB/s 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@51 -- # local i 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.417 00:38:56 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@41 -- # break 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@41 -- # break 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.733 00:38:57 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.991 00:38:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.991 00:38:57 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.991 00:38:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.991 00:38:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.991 00:38:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.991 00:38:57 -- 
bdev/nbd_common.sh@65 -- # echo '' 00:05:04.991 00:38:57 -- bdev/nbd_common.sh@65 -- # true 00:05:04.991 00:38:57 -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.991 00:38:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.991 00:38:57 -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.991 00:38:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.991 00:38:57 -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.991 00:38:57 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.249 00:38:57 -- event/event.sh@35 -- # sleep 3 00:05:05.507 [2024-04-27 00:38:58.004718] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.507 [2024-04-27 00:38:58.069939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.507 [2024-04-27 00:38:58.069941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.507 [2024-04-27 00:38:58.111820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.507 [2024-04-27 00:38:58.111868] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.129 00:39:00 -- event/event.sh@23 -- # for i in {0..2} 00:05:08.129 00:39:00 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:08.129 spdk_app_start Round 2 00:05:08.129 00:39:00 -- event/event.sh@25 -- # waitforlisten 1514225 /var/tmp/spdk-nbd.sock 00:05:08.129 00:39:00 -- common/autotest_common.sh@817 -- # '[' -z 1514225 ']' 00:05:08.129 00:39:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.129 00:39:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:08.129 00:39:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:08.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:08.129 00:39:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:08.129 00:39:00 -- common/autotest_common.sh@10 -- # set +x 00:05:08.387 00:39:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:08.387 00:39:00 -- common/autotest_common.sh@850 -- # return 0 00:05:08.387 00:39:00 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.646 Malloc0 00:05:08.646 00:39:01 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.646 Malloc1 00:05:08.646 00:39:01 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@12 -- # local i 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.646 00:39:01 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.903 /dev/nbd0 00:05:08.903 00:39:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.903 00:39:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:08.903 00:39:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:08.903 00:39:01 -- common/autotest_common.sh@855 -- # local i 00:05:08.903 00:39:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:08.903 00:39:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:08.903 00:39:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:08.903 00:39:01 -- common/autotest_common.sh@859 -- # break 00:05:08.903 00:39:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:08.903 00:39:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:08.903 00:39:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.903 1+0 records in 00:05:08.903 1+0 records out 00:05:08.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188014 s, 21.8 MB/s 00:05:08.903 00:39:01 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.903 00:39:01 -- common/autotest_common.sh@872 -- # size=4096 00:05:08.903 00:39:01 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.903 00:39:01 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:08.903 00:39:01 -- common/autotest_common.sh@875 -- # return 0 00:05:08.903 00:39:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.903 00:39:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.903 00:39:01 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.161 /dev/nbd1 00:05:09.161 00:39:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.161 00:39:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.161 00:39:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:09.161 00:39:01 -- common/autotest_common.sh@855 -- # local i 00:05:09.161 00:39:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:09.161 00:39:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:09.162 00:39:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:09.162 00:39:01 -- common/autotest_common.sh@859 -- # break 00:05:09.162 00:39:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:09.162 00:39:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:09.162 00:39:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.162 1+0 records in 00:05:09.162 1+0 records out 00:05:09.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018074 s, 22.7 MB/s 00:05:09.162 00:39:01 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.162 00:39:01 -- common/autotest_common.sh@872 -- # size=4096 00:05:09.162 00:39:01 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.162 00:39:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:09.162 00:39:01 -- common/autotest_common.sh@875 -- # return 0 00:05:09.162 00:39:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.162 00:39:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.162 00:39:01 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.162 00:39:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.162 00:39:01 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.420 00:39:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.420 { 00:05:09.420 "nbd_device": "/dev/nbd0", 00:05:09.420 "bdev_name": "Malloc0" 00:05:09.421 }, 00:05:09.421 { 00:05:09.421 "nbd_device": "/dev/nbd1", 00:05:09.421 "bdev_name": "Malloc1" 00:05:09.421 } 00:05:09.421 ]' 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.421 { 00:05:09.421 "nbd_device": "/dev/nbd0", 00:05:09.421 "bdev_name": "Malloc0" 00:05:09.421 }, 00:05:09.421 { 00:05:09.421 "nbd_device": "/dev/nbd1", 00:05:09.421 "bdev_name": "Malloc1" 00:05:09.421 } 00:05:09.421 ]' 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.421 /dev/nbd1' 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.421 /dev/nbd1' 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.421 00:39:01 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.421 256+0 records in 00:05:09.421 256+0 records out 00:05:09.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102725 s, 102 MB/s 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.421 256+0 records in 00:05:09.421 256+0 records out 00:05:09.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139319 s, 75.3 MB/s 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.421 256+0 records in 00:05:09.421 256+0 records out 00:05:09.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150532 s, 69.7 MB/s 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.421 00:39:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.421 00:39:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.421 00:39:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.421 00:39:02 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:09.421 00:39:02 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.421 00:39:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.421 00:39:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.421 00:39:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.421 00:39:02 -- bdev/nbd_common.sh@51 -- # local i 00:05:09.421 00:39:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.421 00:39:02 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.679 00:39:02 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.679 00:39:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.679 00:39:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.679 00:39:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.679 00:39:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.679 00:39:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.679 00:39:02 -- bdev/nbd_common.sh@41 -- # break 00:05:09.679 00:39:02 -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.679 00:39:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.679 00:39:02 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@41 -- # break 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@65 -- # true 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.938 00:39:02 -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.938 00:39:02 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.196 00:39:02 -- event/event.sh@35 -- # sleep 3 00:05:10.455 [2024-04-27 00:39:03.020533] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.455 [2024-04-27 00:39:03.087366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.455 [2024-04-27 00:39:03.087368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.455 [2024-04-27 00:39:03.129519] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:10.455 [2024-04-27 00:39:03.129563] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:13.764 00:39:05 -- event/event.sh@38 -- # waitforlisten 1514225 /var/tmp/spdk-nbd.sock 00:05:13.764 00:39:05 -- common/autotest_common.sh@817 -- # '[' -z 1514225 ']' 00:05:13.764 00:39:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.764 00:39:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:13.764 00:39:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.764 00:39:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:13.764 00:39:05 -- common/autotest_common.sh@10 -- # set +x 00:05:13.764 00:39:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:13.764 00:39:05 -- common/autotest_common.sh@850 -- # return 0 00:05:13.764 00:39:05 -- event/event.sh@39 -- # killprocess 1514225 00:05:13.764 00:39:05 -- common/autotest_common.sh@936 -- # '[' -z 1514225 ']' 00:05:13.764 00:39:05 -- common/autotest_common.sh@940 -- # kill -0 1514225 00:05:13.764 00:39:05 -- common/autotest_common.sh@941 -- # uname 00:05:13.764 00:39:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:13.764 00:39:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1514225 00:05:13.764 00:39:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:13.764 00:39:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:13.764 00:39:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1514225' 00:05:13.764 killing process with pid 1514225 00:05:13.764 00:39:06 -- common/autotest_common.sh@955 -- # kill 1514225 00:05:13.764 00:39:06 -- common/autotest_common.sh@960 -- # wait 1514225 00:05:13.764 spdk_app_start is called in Round 0. 00:05:13.764 Shutdown signal received, stop current app iteration 00:05:13.764 Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 reinitialization... 00:05:13.764 spdk_app_start is called in Round 1. 00:05:13.764 Shutdown signal received, stop current app iteration 00:05:13.764 Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 reinitialization... 00:05:13.764 spdk_app_start is called in Round 2. 00:05:13.764 Shutdown signal received, stop current app iteration 00:05:13.764 Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 reinitialization... 00:05:13.764 spdk_app_start is called in Round 3. 
00:05:13.764 Shutdown signal received, stop current app iteration 00:05:13.764 00:39:06 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:13.764 00:39:06 -- event/event.sh@42 -- # return 0 00:05:13.764 00:05:13.764 real 0m16.243s 00:05:13.764 user 0m35.124s 00:05:13.764 sys 0m2.270s 00:05:13.764 00:39:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.764 00:39:06 -- common/autotest_common.sh@10 -- # set +x 00:05:13.764 ************************************ 00:05:13.764 END TEST app_repeat 00:05:13.764 ************************************ 00:05:13.764 00:39:06 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:13.764 00:39:06 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:13.764 00:39:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.764 00:39:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.764 00:39:06 -- common/autotest_common.sh@10 -- # set +x 00:05:13.764 ************************************ 00:05:13.764 START TEST cpu_locks 00:05:13.764 ************************************ 00:05:13.764 00:39:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:14.022 * Looking for test storage... 00:05:14.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:14.022 00:39:06 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:14.022 00:39:06 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:14.022 00:39:06 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:14.022 00:39:06 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:14.022 00:39:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.022 00:39:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.022 00:39:06 -- common/autotest_common.sh@10 -- # set +x 00:05:14.022 ************************************ 00:05:14.022 START TEST default_locks 00:05:14.022 ************************************ 00:05:14.022 00:39:06 -- common/autotest_common.sh@1111 -- # default_locks 00:05:14.022 00:39:06 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1517644 00:05:14.022 00:39:06 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.022 00:39:06 -- event/cpu_locks.sh@47 -- # waitforlisten 1517644 00:05:14.022 00:39:06 -- common/autotest_common.sh@817 -- # '[' -z 1517644 ']' 00:05:14.022 00:39:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.022 00:39:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:14.022 00:39:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.022 00:39:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:14.022 00:39:06 -- common/autotest_common.sh@10 -- # set +x 00:05:14.022 [2024-04-27 00:39:06.683436] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:14.022 [2024-04-27 00:39:06.683475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1517644 ] 00:05:14.022 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.281 [2024-04-27 00:39:06.733709] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.281 [2024-04-27 00:39:06.809067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.848 00:39:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:14.848 00:39:07 -- common/autotest_common.sh@850 -- # return 0 00:05:14.848 00:39:07 -- event/cpu_locks.sh@49 -- # locks_exist 1517644 00:05:14.848 00:39:07 -- event/cpu_locks.sh@22 -- # lslocks -p 1517644 00:05:14.848 00:39:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.415 lslocks: write error 00:05:15.415 00:39:07 -- event/cpu_locks.sh@50 -- # killprocess 1517644 00:05:15.415 00:39:07 -- common/autotest_common.sh@936 -- # '[' -z 1517644 ']' 00:05:15.415 00:39:07 -- common/autotest_common.sh@940 -- # kill -0 1517644 00:05:15.415 00:39:07 -- common/autotest_common.sh@941 -- # uname 00:05:15.416 00:39:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:15.416 00:39:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1517644 00:05:15.416 00:39:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:15.416 00:39:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:15.416 00:39:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1517644' 00:05:15.416 killing process with pid 1517644 00:05:15.416 00:39:07 -- common/autotest_common.sh@955 -- # kill 1517644 00:05:15.416 00:39:07 -- common/autotest_common.sh@960 -- # wait 1517644 00:05:15.675 00:39:08 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1517644 00:05:15.675 00:39:08 -- common/autotest_common.sh@638 -- # local es=0 00:05:15.675 00:39:08 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1517644 00:05:15.675 00:39:08 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:15.675 00:39:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:15.675 00:39:08 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:15.675 00:39:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:15.675 00:39:08 -- common/autotest_common.sh@641 -- # waitforlisten 1517644 00:05:15.675 00:39:08 -- common/autotest_common.sh@817 -- # '[' -z 1517644 ']' 00:05:15.675 00:39:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.675 00:39:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:15.675 00:39:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:15.675 00:39:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:15.675 00:39:08 -- common/autotest_common.sh@10 -- # set +x 00:05:15.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1517644) - No such process 00:05:15.675 ERROR: process (pid: 1517644) is no longer running 00:05:15.675 00:39:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:15.675 00:39:08 -- common/autotest_common.sh@850 -- # return 1 00:05:15.675 00:39:08 -- common/autotest_common.sh@641 -- # es=1 00:05:15.675 00:39:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:15.675 00:39:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:15.675 00:39:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:15.675 00:39:08 -- event/cpu_locks.sh@54 -- # no_locks 00:05:15.675 00:39:08 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:15.675 00:39:08 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:15.675 00:39:08 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:15.675 00:05:15.675 real 0m1.588s 00:05:15.675 user 0m1.667s 00:05:15.675 sys 0m0.503s 00:05:15.675 00:39:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.675 00:39:08 -- common/autotest_common.sh@10 -- # set +x 00:05:15.675 ************************************ 00:05:15.675 END TEST default_locks 00:05:15.675 ************************************ 00:05:15.675 00:39:08 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:15.675 00:39:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.675 00:39:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.675 00:39:08 -- common/autotest_common.sh@10 -- # set +x 00:05:15.935 ************************************ 00:05:15.935 START TEST default_locks_via_rpc 00:05:15.935 ************************************ 00:05:15.935 00:39:08 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:15.935 00:39:08 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1518020 00:05:15.935 00:39:08 -- event/cpu_locks.sh@63 -- # waitforlisten 1518020 00:05:15.935 00:39:08 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.935 00:39:08 -- common/autotest_common.sh@817 -- # '[' -z 1518020 ']' 00:05:15.935 00:39:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.935 00:39:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:15.935 00:39:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.935 00:39:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:15.935 00:39:08 -- common/autotest_common.sh@10 -- # set +x 00:05:15.935 [2024-04-27 00:39:08.449552] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:15.935 [2024-04-27 00:39:08.449597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518020 ] 00:05:15.935 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.935 [2024-04-27 00:39:08.504676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.935 [2024-04-27 00:39:08.578426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.872 00:39:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:16.872 00:39:09 -- common/autotest_common.sh@850 -- # return 0 00:05:16.872 00:39:09 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:16.872 00:39:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.872 00:39:09 -- common/autotest_common.sh@10 -- # set +x 00:05:16.872 00:39:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.872 00:39:09 -- event/cpu_locks.sh@67 -- # no_locks 00:05:16.872 00:39:09 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:16.872 00:39:09 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:16.872 00:39:09 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:16.872 00:39:09 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:16.872 00:39:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.872 00:39:09 -- common/autotest_common.sh@10 -- # set +x 00:05:16.872 00:39:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.872 00:39:09 -- event/cpu_locks.sh@71 -- # locks_exist 1518020 00:05:16.872 00:39:09 -- event/cpu_locks.sh@22 -- # lslocks -p 1518020 00:05:16.872 00:39:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.872 00:39:09 -- event/cpu_locks.sh@73 -- # killprocess 1518020 00:05:16.872 00:39:09 -- common/autotest_common.sh@936 -- # '[' -z 1518020 ']' 00:05:16.872 00:39:09 -- common/autotest_common.sh@940 -- # kill -0 1518020 00:05:16.872 00:39:09 -- common/autotest_common.sh@941 -- # uname 00:05:16.872 00:39:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:16.872 00:39:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1518020 00:05:16.872 00:39:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:16.872 00:39:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:16.872 00:39:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1518020' 00:05:16.872 killing process with pid 1518020 00:05:16.872 00:39:09 -- common/autotest_common.sh@955 -- # kill 1518020 00:05:16.872 00:39:09 -- common/autotest_common.sh@960 -- # wait 1518020 00:05:17.131 00:05:17.131 real 0m1.364s 00:05:17.131 user 0m1.448s 00:05:17.131 sys 0m0.388s 00:05:17.131 00:39:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.131 00:39:09 -- common/autotest_common.sh@10 -- # set +x 00:05:17.131 ************************************ 00:05:17.131 END TEST default_locks_via_rpc 00:05:17.131 ************************************ 00:05:17.131 00:39:09 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:17.131 00:39:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.131 00:39:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.131 00:39:09 -- common/autotest_common.sh@10 -- # set +x 00:05:17.390 ************************************ 00:05:17.390 START TEST non_locking_app_on_locked_coremask 
00:05:17.390 ************************************ 00:05:17.390 00:39:09 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:17.390 00:39:09 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1518343 00:05:17.390 00:39:09 -- event/cpu_locks.sh@81 -- # waitforlisten 1518343 /var/tmp/spdk.sock 00:05:17.390 00:39:09 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.390 00:39:09 -- common/autotest_common.sh@817 -- # '[' -z 1518343 ']' 00:05:17.390 00:39:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.390 00:39:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:17.390 00:39:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.390 00:39:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:17.390 00:39:09 -- common/autotest_common.sh@10 -- # set +x 00:05:17.390 [2024-04-27 00:39:09.974867] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:17.390 [2024-04-27 00:39:09.974909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518343 ] 00:05:17.390 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.390 [2024-04-27 00:39:10.031457] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.649 [2024-04-27 00:39:10.123989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.217 00:39:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:18.217 00:39:10 -- common/autotest_common.sh@850 -- # return 0 00:05:18.217 00:39:10 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:18.217 00:39:10 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1518521 00:05:18.217 00:39:10 -- event/cpu_locks.sh@85 -- # waitforlisten 1518521 /var/tmp/spdk2.sock 00:05:18.217 00:39:10 -- common/autotest_common.sh@817 -- # '[' -z 1518521 ']' 00:05:18.217 00:39:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.218 00:39:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:18.218 00:39:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.218 00:39:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:18.218 00:39:10 -- common/autotest_common.sh@10 -- # set +x 00:05:18.218 [2024-04-27 00:39:10.809475] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:18.218 [2024-04-27 00:39:10.809521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1518521 ] 00:05:18.218 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.218 [2024-04-27 00:39:10.883196] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:18.218 [2024-04-27 00:39:10.883216] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.477 [2024-04-27 00:39:11.028367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.044 00:39:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:19.044 00:39:11 -- common/autotest_common.sh@850 -- # return 0 00:05:19.044 00:39:11 -- event/cpu_locks.sh@87 -- # locks_exist 1518343 00:05:19.044 00:39:11 -- event/cpu_locks.sh@22 -- # lslocks -p 1518343 00:05:19.044 00:39:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.611 lslocks: write error 00:05:19.611 00:39:12 -- event/cpu_locks.sh@89 -- # killprocess 1518343 00:05:19.611 00:39:12 -- common/autotest_common.sh@936 -- # '[' -z 1518343 ']' 00:05:19.611 00:39:12 -- common/autotest_common.sh@940 -- # kill -0 1518343 00:05:19.611 00:39:12 -- common/autotest_common.sh@941 -- # uname 00:05:19.611 00:39:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:19.611 00:39:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1518343 00:05:19.611 00:39:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:19.611 00:39:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:19.611 00:39:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1518343' 00:05:19.611 killing process with pid 1518343 00:05:19.611 00:39:12 -- common/autotest_common.sh@955 -- # kill 1518343 00:05:19.611 00:39:12 -- common/autotest_common.sh@960 -- # wait 1518343 00:05:20.179 00:39:12 -- event/cpu_locks.sh@90 -- # killprocess 1518521 00:05:20.179 00:39:12 -- common/autotest_common.sh@936 -- # '[' -z 1518521 ']' 00:05:20.179 00:39:12 -- common/autotest_common.sh@940 -- # kill -0 1518521 00:05:20.179 00:39:12 -- common/autotest_common.sh@941 -- # uname 00:05:20.179 00:39:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:20.179 00:39:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1518521 00:05:20.179 00:39:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:20.179 00:39:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:20.179 00:39:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1518521' 00:05:20.179 killing process with pid 1518521 00:05:20.179 00:39:12 -- common/autotest_common.sh@955 -- # kill 1518521 00:05:20.179 00:39:12 -- common/autotest_common.sh@960 -- # wait 1518521 00:05:20.455 00:05:20.455 real 0m3.184s 00:05:20.455 user 0m3.388s 00:05:20.455 sys 0m0.886s 00:05:20.455 00:39:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.455 00:39:13 -- common/autotest_common.sh@10 -- # set +x 00:05:20.455 ************************************ 00:05:20.455 END TEST non_locking_app_on_locked_coremask 00:05:20.455 ************************************ 00:05:20.455 00:39:13 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:20.455 00:39:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:20.455 00:39:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.455 00:39:13 -- common/autotest_common.sh@10 -- # set +x 00:05:20.717 ************************************ 00:05:20.717 START TEST locking_app_on_unlocked_coremask 00:05:20.717 ************************************ 00:05:20.717 00:39:13 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:20.717 00:39:13 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1519017 00:05:20.717 00:39:13 -- 
event/cpu_locks.sh@99 -- # waitforlisten 1519017 /var/tmp/spdk.sock 00:05:20.717 00:39:13 -- common/autotest_common.sh@817 -- # '[' -z 1519017 ']' 00:05:20.717 00:39:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.717 00:39:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:20.717 00:39:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.717 00:39:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:20.717 00:39:13 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:20.717 00:39:13 -- common/autotest_common.sh@10 -- # set +x 00:05:20.717 [2024-04-27 00:39:13.298200] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:20.717 [2024-04-27 00:39:13.298242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519017 ] 00:05:20.717 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.717 [2024-04-27 00:39:13.351194] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:20.717 [2024-04-27 00:39:13.351219] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.976 [2024-04-27 00:39:13.430365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.544 00:39:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:21.544 00:39:14 -- common/autotest_common.sh@850 -- # return 0 00:05:21.544 00:39:14 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1519105 00:05:21.544 00:39:14 -- event/cpu_locks.sh@103 -- # waitforlisten 1519105 /var/tmp/spdk2.sock 00:05:21.544 00:39:14 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:21.544 00:39:14 -- common/autotest_common.sh@817 -- # '[' -z 1519105 ']' 00:05:21.544 00:39:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.544 00:39:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:21.544 00:39:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.544 00:39:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:21.544 00:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:21.544 [2024-04-27 00:39:14.144411] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:21.544 [2024-04-27 00:39:14.144461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519105 ] 00:05:21.544 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.544 [2024-04-27 00:39:14.219395] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.803 [2024-04-27 00:39:14.370546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.371 00:39:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:22.371 00:39:14 -- common/autotest_common.sh@850 -- # return 0 00:05:22.371 00:39:14 -- event/cpu_locks.sh@105 -- # locks_exist 1519105 00:05:22.371 00:39:14 -- event/cpu_locks.sh@22 -- # lslocks -p 1519105 00:05:22.371 00:39:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.939 lslocks: write error 00:05:22.939 00:39:15 -- event/cpu_locks.sh@107 -- # killprocess 1519017 00:05:22.939 00:39:15 -- common/autotest_common.sh@936 -- # '[' -z 1519017 ']' 00:05:22.939 00:39:15 -- common/autotest_common.sh@940 -- # kill -0 1519017 00:05:22.939 00:39:15 -- common/autotest_common.sh@941 -- # uname 00:05:22.939 00:39:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:22.939 00:39:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1519017 00:05:22.939 00:39:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:22.939 00:39:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:22.939 00:39:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1519017' 00:05:22.939 killing process with pid 1519017 00:05:22.939 00:39:15 -- common/autotest_common.sh@955 -- # kill 1519017 00:05:22.939 00:39:15 -- common/autotest_common.sh@960 -- # wait 1519017 00:05:23.508 00:39:16 -- event/cpu_locks.sh@108 -- # killprocess 1519105 00:05:23.508 00:39:16 -- common/autotest_common.sh@936 -- # '[' -z 1519105 ']' 00:05:23.508 00:39:16 -- common/autotest_common.sh@940 -- # kill -0 1519105 00:05:23.508 00:39:16 -- common/autotest_common.sh@941 -- # uname 00:05:23.508 00:39:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:23.508 00:39:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1519105 00:05:23.767 00:39:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:23.767 00:39:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:23.767 00:39:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1519105' 00:05:23.767 killing process with pid 1519105 00:05:23.767 00:39:16 -- common/autotest_common.sh@955 -- # kill 1519105 00:05:23.767 00:39:16 -- common/autotest_common.sh@960 -- # wait 1519105 00:05:24.027 00:05:24.027 real 0m3.305s 00:05:24.027 user 0m3.543s 00:05:24.027 sys 0m0.935s 00:05:24.027 00:39:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.027 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.027 ************************************ 00:05:24.027 END TEST locking_app_on_unlocked_coremask 00:05:24.027 ************************************ 00:05:24.027 00:39:16 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:24.027 00:39:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.027 00:39:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.027 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.027 
************************************ 00:05:24.027 START TEST locking_app_on_locked_coremask 00:05:24.027 ************************************ 00:05:24.027 00:39:16 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:24.027 00:39:16 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1519535 00:05:24.027 00:39:16 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.027 00:39:16 -- event/cpu_locks.sh@116 -- # waitforlisten 1519535 /var/tmp/spdk.sock 00:05:24.027 00:39:16 -- common/autotest_common.sh@817 -- # '[' -z 1519535 ']' 00:05:24.027 00:39:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.027 00:39:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:24.027 00:39:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.027 00:39:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:24.027 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.287 [2024-04-27 00:39:16.756524] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:24.287 [2024-04-27 00:39:16.756566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519535 ] 00:05:24.287 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.287 [2024-04-27 00:39:16.811447] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.287 [2024-04-27 00:39:16.890365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.224 00:39:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:25.224 00:39:17 -- common/autotest_common.sh@850 -- # return 0 00:05:25.224 00:39:17 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1519755 00:05:25.224 00:39:17 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1519755 /var/tmp/spdk2.sock 00:05:25.224 00:39:17 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:25.224 00:39:17 -- common/autotest_common.sh@638 -- # local es=0 00:05:25.224 00:39:17 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1519755 /var/tmp/spdk2.sock 00:05:25.224 00:39:17 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:25.224 00:39:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:25.224 00:39:17 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:25.224 00:39:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:25.224 00:39:17 -- common/autotest_common.sh@641 -- # waitforlisten 1519755 /var/tmp/spdk2.sock 00:05:25.224 00:39:17 -- common/autotest_common.sh@817 -- # '[' -z 1519755 ']' 00:05:25.224 00:39:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.224 00:39:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:25.224 00:39:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:25.224 00:39:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:25.224 00:39:17 -- common/autotest_common.sh@10 -- # set +x 00:05:25.224 [2024-04-27 00:39:17.592743] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:25.224 [2024-04-27 00:39:17.592794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1519755 ] 00:05:25.224 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.224 [2024-04-27 00:39:17.670414] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1519535 has claimed it. 00:05:25.224 [2024-04-27 00:39:17.670451] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:25.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1519755) - No such process 00:05:25.793 ERROR: process (pid: 1519755) is no longer running 00:05:25.793 00:39:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:25.793 00:39:18 -- common/autotest_common.sh@850 -- # return 1 00:05:25.793 00:39:18 -- common/autotest_common.sh@641 -- # es=1 00:05:25.793 00:39:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:25.793 00:39:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:25.793 00:39:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:25.793 00:39:18 -- event/cpu_locks.sh@122 -- # locks_exist 1519535 00:05:25.793 00:39:18 -- event/cpu_locks.sh@22 -- # lslocks -p 1519535 00:05:25.793 00:39:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.052 lslocks: write error 00:05:26.052 00:39:18 -- event/cpu_locks.sh@124 -- # killprocess 1519535 00:05:26.052 00:39:18 -- common/autotest_common.sh@936 -- # '[' -z 1519535 ']' 00:05:26.052 00:39:18 -- common/autotest_common.sh@940 -- # kill -0 1519535 00:05:26.052 00:39:18 -- common/autotest_common.sh@941 -- # uname 00:05:26.052 00:39:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.052 00:39:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1519535 00:05:26.052 00:39:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.052 00:39:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.052 00:39:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1519535' 00:05:26.052 killing process with pid 1519535 00:05:26.052 00:39:18 -- common/autotest_common.sh@955 -- # kill 1519535 00:05:26.052 00:39:18 -- common/autotest_common.sh@960 -- # wait 1519535 00:05:26.310 00:05:26.310 real 0m2.227s 00:05:26.310 user 0m2.464s 00:05:26.310 sys 0m0.560s 00:05:26.310 00:39:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:26.310 00:39:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.310 ************************************ 00:05:26.310 END TEST locking_app_on_locked_coremask 00:05:26.310 ************************************ 00:05:26.310 00:39:18 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:26.310 00:39:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.310 00:39:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.310 00:39:18 -- common/autotest_common.sh@10 -- # set +x 00:05:26.569 ************************************ 00:05:26.569 START TEST locking_overlapped_coremask 00:05:26.569 
************************************ 00:05:26.569 00:39:19 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:26.569 00:39:19 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1520033 00:05:26.569 00:39:19 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:26.569 00:39:19 -- event/cpu_locks.sh@133 -- # waitforlisten 1520033 /var/tmp/spdk.sock 00:05:26.569 00:39:19 -- common/autotest_common.sh@817 -- # '[' -z 1520033 ']' 00:05:26.569 00:39:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.569 00:39:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:26.569 00:39:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.569 00:39:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:26.569 00:39:19 -- common/autotest_common.sh@10 -- # set +x 00:05:26.569 [2024-04-27 00:39:19.143068] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:26.569 [2024-04-27 00:39:19.143111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520033 ] 00:05:26.569 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.569 [2024-04-27 00:39:19.197347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.828 [2024-04-27 00:39:19.277949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.828 [2024-04-27 00:39:19.278040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.828 [2024-04-27 00:39:19.278038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.395 00:39:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:27.395 00:39:19 -- common/autotest_common.sh@850 -- # return 0 00:05:27.396 00:39:19 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1520260 00:05:27.396 00:39:19 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1520260 /var/tmp/spdk2.sock 00:05:27.396 00:39:19 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:27.396 00:39:19 -- common/autotest_common.sh@638 -- # local es=0 00:05:27.396 00:39:19 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 1520260 /var/tmp/spdk2.sock 00:05:27.396 00:39:19 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:27.396 00:39:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.396 00:39:19 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:27.396 00:39:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.396 00:39:19 -- common/autotest_common.sh@641 -- # waitforlisten 1520260 /var/tmp/spdk2.sock 00:05:27.396 00:39:19 -- common/autotest_common.sh@817 -- # '[' -z 1520260 ']' 00:05:27.396 00:39:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.396 00:39:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:27.396 00:39:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:27.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.396 00:39:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:27.396 00:39:19 -- common/autotest_common.sh@10 -- # set +x 00:05:27.396 [2024-04-27 00:39:20.005130] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:27.396 [2024-04-27 00:39:20.005177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520260 ] 00:05:27.396 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.396 [2024-04-27 00:39:20.084364] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1520033 has claimed it. 00:05:27.396 [2024-04-27 00:39:20.084403] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:27.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (1520260) - No such process 00:05:27.962 ERROR: process (pid: 1520260) is no longer running 00:05:27.962 00:39:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:27.962 00:39:20 -- common/autotest_common.sh@850 -- # return 1 00:05:27.962 00:39:20 -- common/autotest_common.sh@641 -- # es=1 00:05:27.962 00:39:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:27.962 00:39:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:27.962 00:39:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:27.962 00:39:20 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:27.962 00:39:20 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.962 00:39:20 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.962 00:39:20 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.962 00:39:20 -- event/cpu_locks.sh@141 -- # killprocess 1520033 00:05:27.962 00:39:20 -- common/autotest_common.sh@936 -- # '[' -z 1520033 ']' 00:05:27.962 00:39:20 -- common/autotest_common.sh@940 -- # kill -0 1520033 00:05:27.962 00:39:20 -- common/autotest_common.sh@941 -- # uname 00:05:27.962 00:39:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:27.962 00:39:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1520033 00:05:28.221 00:39:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.221 00:39:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.221 00:39:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1520033' 00:05:28.221 killing process with pid 1520033 00:05:28.221 00:39:20 -- common/autotest_common.sh@955 -- # kill 1520033 00:05:28.221 00:39:20 -- common/autotest_common.sh@960 -- # wait 1520033 00:05:28.481 00:05:28.481 real 0m1.927s 00:05:28.481 user 0m5.438s 00:05:28.481 sys 0m0.383s 00:05:28.481 00:39:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.481 00:39:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.481 ************************************ 00:05:28.481 END TEST locking_overlapped_coremask 00:05:28.481 ************************************ 00:05:28.481 00:39:21 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:28.481 00:39:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.481 00:39:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.481 00:39:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.739 ************************************ 00:05:28.739 START TEST locking_overlapped_coremask_via_rpc 00:05:28.739 ************************************ 00:05:28.739 00:39:21 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:05:28.739 00:39:21 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1520505 00:05:28.739 00:39:21 -- event/cpu_locks.sh@149 -- # waitforlisten 1520505 /var/tmp/spdk.sock 00:05:28.739 00:39:21 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:28.739 00:39:21 -- common/autotest_common.sh@817 -- # '[' -z 1520505 ']' 00:05:28.739 00:39:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.739 00:39:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:28.739 00:39:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.739 00:39:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:28.739 00:39:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.739 [2024-04-27 00:39:21.237199] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:28.739 [2024-04-27 00:39:21.237245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520505 ] 00:05:28.739 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.739 [2024-04-27 00:39:21.292801] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:28.739 [2024-04-27 00:39:21.292827] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.739 [2024-04-27 00:39:21.374060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.739 [2024-04-27 00:39:21.374155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.739 [2024-04-27 00:39:21.374156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.678 00:39:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:29.678 00:39:22 -- common/autotest_common.sh@850 -- # return 0 00:05:29.678 00:39:22 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1520545 00:05:29.678 00:39:22 -- event/cpu_locks.sh@153 -- # waitforlisten 1520545 /var/tmp/spdk2.sock 00:05:29.678 00:39:22 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:29.678 00:39:22 -- common/autotest_common.sh@817 -- # '[' -z 1520545 ']' 00:05:29.678 00:39:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.678 00:39:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:29.678 00:39:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:29.678 00:39:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:29.678 00:39:22 -- common/autotest_common.sh@10 -- # set +x 00:05:29.678 [2024-04-27 00:39:22.080083] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:29.678 [2024-04-27 00:39:22.080130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520545 ] 00:05:29.678 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.678 [2024-04-27 00:39:22.154294] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:29.678 [2024-04-27 00:39:22.154318] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.678 [2024-04-27 00:39:22.299957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.678 [2024-04-27 00:39:22.303116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.678 [2024-04-27 00:39:22.303117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:30.245 00:39:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.245 00:39:22 -- common/autotest_common.sh@850 -- # return 0 00:05:30.245 00:39:22 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:30.245 00:39:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.245 00:39:22 -- common/autotest_common.sh@10 -- # set +x 00:05:30.245 00:39:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.245 00:39:22 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.245 00:39:22 -- common/autotest_common.sh@638 -- # local es=0 00:05:30.245 00:39:22 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.245 00:39:22 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:30.245 00:39:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:30.245 00:39:22 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:30.245 00:39:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:30.245 00:39:22 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:30.245 00:39:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.246 00:39:22 -- common/autotest_common.sh@10 -- # set +x 00:05:30.246 [2024-04-27 00:39:22.899140] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1520505 has claimed it. 
00:05:30.246 request: 00:05:30.246 { 00:05:30.246 "method": "framework_enable_cpumask_locks", 00:05:30.246 "req_id": 1 00:05:30.246 } 00:05:30.246 Got JSON-RPC error response 00:05:30.246 response: 00:05:30.246 { 00:05:30.246 "code": -32603, 00:05:30.246 "message": "Failed to claim CPU core: 2" 00:05:30.246 } 00:05:30.246 00:39:22 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:30.246 00:39:22 -- common/autotest_common.sh@641 -- # es=1 00:05:30.246 00:39:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:30.246 00:39:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:30.246 00:39:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:30.246 00:39:22 -- event/cpu_locks.sh@158 -- # waitforlisten 1520505 /var/tmp/spdk.sock 00:05:30.246 00:39:22 -- common/autotest_common.sh@817 -- # '[' -z 1520505 ']' 00:05:30.246 00:39:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.246 00:39:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:30.246 00:39:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.246 00:39:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:30.246 00:39:22 -- common/autotest_common.sh@10 -- # set +x 00:05:30.503 00:39:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.503 00:39:23 -- common/autotest_common.sh@850 -- # return 0 00:05:30.503 00:39:23 -- event/cpu_locks.sh@159 -- # waitforlisten 1520545 /var/tmp/spdk2.sock 00:05:30.503 00:39:23 -- common/autotest_common.sh@817 -- # '[' -z 1520545 ']' 00:05:30.503 00:39:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.503 00:39:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:30.503 00:39:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:30.503 00:39:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:30.503 00:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:30.760 00:39:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.760 00:39:23 -- common/autotest_common.sh@850 -- # return 0 00:05:30.760 00:39:23 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:30.760 00:39:23 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:30.760 00:39:23 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:30.760 00:39:23 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:30.760 00:05:30.760 real 0m2.083s 00:05:30.760 user 0m0.875s 00:05:30.760 sys 0m0.138s 00:05:30.760 00:39:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:30.760 00:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:30.760 ************************************ 00:05:30.760 END TEST locking_overlapped_coremask_via_rpc 00:05:30.760 ************************************ 00:05:30.760 00:39:23 -- event/cpu_locks.sh@174 -- # cleanup 00:05:30.760 00:39:23 -- event/cpu_locks.sh@15 -- # [[ -z 1520505 ]] 00:05:30.760 00:39:23 -- event/cpu_locks.sh@15 -- # killprocess 1520505 00:05:30.760 00:39:23 -- common/autotest_common.sh@936 -- # '[' -z 1520505 ']' 00:05:30.760 00:39:23 -- common/autotest_common.sh@940 -- # kill -0 1520505 00:05:30.760 00:39:23 -- common/autotest_common.sh@941 -- # uname 00:05:30.760 00:39:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:30.760 00:39:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1520505 00:05:30.760 00:39:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:30.760 00:39:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:30.760 00:39:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1520505' 00:05:30.760 killing process with pid 1520505 00:05:30.760 00:39:23 -- common/autotest_common.sh@955 -- # kill 1520505 00:05:30.760 00:39:23 -- common/autotest_common.sh@960 -- # wait 1520505 00:05:31.019 00:39:23 -- event/cpu_locks.sh@16 -- # [[ -z 1520545 ]] 00:05:31.019 00:39:23 -- event/cpu_locks.sh@16 -- # killprocess 1520545 00:05:31.019 00:39:23 -- common/autotest_common.sh@936 -- # '[' -z 1520545 ']' 00:05:31.019 00:39:23 -- common/autotest_common.sh@940 -- # kill -0 1520545 00:05:31.019 00:39:23 -- common/autotest_common.sh@941 -- # uname 00:05:31.019 00:39:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.019 00:39:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1520545 00:05:31.277 00:39:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:31.277 00:39:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:31.277 00:39:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1520545' 00:05:31.277 killing process with pid 1520545 00:05:31.277 00:39:23 -- common/autotest_common.sh@955 -- # kill 1520545 00:05:31.277 00:39:23 -- common/autotest_common.sh@960 -- # wait 1520545 00:05:31.535 00:39:24 -- event/cpu_locks.sh@18 -- # rm -f 00:05:31.535 00:39:24 -- event/cpu_locks.sh@1 -- # cleanup 00:05:31.535 00:39:24 -- event/cpu_locks.sh@15 -- # [[ -z 1520505 ]] 00:05:31.535 00:39:24 -- event/cpu_locks.sh@15 -- # killprocess 1520505 
00:05:31.535 00:39:24 -- common/autotest_common.sh@936 -- # '[' -z 1520505 ']' 00:05:31.535 00:39:24 -- common/autotest_common.sh@940 -- # kill -0 1520505 00:05:31.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1520505) - No such process 00:05:31.535 00:39:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1520505 is not found' 00:05:31.535 Process with pid 1520505 is not found 00:05:31.535 00:39:24 -- event/cpu_locks.sh@16 -- # [[ -z 1520545 ]] 00:05:31.535 00:39:24 -- event/cpu_locks.sh@16 -- # killprocess 1520545 00:05:31.535 00:39:24 -- common/autotest_common.sh@936 -- # '[' -z 1520545 ']' 00:05:31.536 00:39:24 -- common/autotest_common.sh@940 -- # kill -0 1520545 00:05:31.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1520545) - No such process 00:05:31.536 00:39:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1520545 is not found' 00:05:31.536 Process with pid 1520545 is not found 00:05:31.536 00:39:24 -- event/cpu_locks.sh@18 -- # rm -f 00:05:31.536 00:05:31.536 real 0m17.678s 00:05:31.536 user 0m29.653s 00:05:31.536 sys 0m4.982s 00:05:31.536 00:39:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.536 00:39:24 -- common/autotest_common.sh@10 -- # set +x 00:05:31.536 ************************************ 00:05:31.536 END TEST cpu_locks 00:05:31.536 ************************************ 00:05:31.536 00:05:31.536 real 0m43.937s 00:05:31.536 user 1m22.436s 00:05:31.536 sys 0m8.521s 00:05:31.536 00:39:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.536 00:39:24 -- common/autotest_common.sh@10 -- # set +x 00:05:31.536 ************************************ 00:05:31.536 END TEST event 00:05:31.536 ************************************ 00:05:31.536 00:39:24 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:31.536 00:39:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.536 00:39:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.536 00:39:24 -- common/autotest_common.sh@10 -- # set +x 00:05:31.793 ************************************ 00:05:31.793 START TEST thread 00:05:31.793 ************************************ 00:05:31.793 00:39:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:31.793 * Looking for test storage... 00:05:31.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:31.793 00:39:24 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:31.793 00:39:24 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:31.793 00:39:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.793 00:39:24 -- common/autotest_common.sh@10 -- # set +x 00:05:31.793 ************************************ 00:05:31.793 START TEST thread_poller_perf 00:05:31.793 ************************************ 00:05:31.793 00:39:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:32.051 [2024-04-27 00:39:24.504211] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:32.051 [2024-04-27 00:39:24.504281] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521108 ] 00:05:32.051 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.051 [2024-04-27 00:39:24.560451] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.051 [2024-04-27 00:39:24.631007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.051 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:33.427 ====================================== 00:05:33.427 busy:2305720712 (cyc) 00:05:33.427 total_run_count: 400000 00:05:33.427 tsc_hz: 2300000000 (cyc) 00:05:33.427 ====================================== 00:05:33.427 poller_cost: 5764 (cyc), 2506 (nsec) 00:05:33.427 00:05:33.427 real 0m1.243s 00:05:33.427 user 0m1.174s 00:05:33.427 sys 0m0.065s 00:05:33.427 00:39:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.427 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:33.427 ************************************ 00:05:33.427 END TEST thread_poller_perf 00:05:33.427 ************************************ 00:05:33.427 00:39:25 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:33.427 00:39:25 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:33.427 00:39:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.427 00:39:25 -- common/autotest_common.sh@10 -- # set +x 00:05:33.427 ************************************ 00:05:33.427 START TEST thread_poller_perf 00:05:33.427 ************************************ 00:05:33.427 00:39:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:33.427 [2024-04-27 00:39:25.889177] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:33.427 [2024-04-27 00:39:25.889247] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521362 ] 00:05:33.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.427 [2024-04-27 00:39:25.945686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.427 [2024-04-27 00:39:26.016815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.427 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:34.816 ====================================== 00:05:34.816 busy:2301558312 (cyc) 00:05:34.816 total_run_count: 5469000 00:05:34.816 tsc_hz: 2300000000 (cyc) 00:05:34.816 ====================================== 00:05:34.816 poller_cost: 420 (cyc), 182 (nsec) 00:05:34.816 00:05:34.816 real 0m1.231s 00:05:34.816 user 0m1.157s 00:05:34.816 sys 0m0.070s 00:05:34.816 00:39:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.816 00:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:34.816 ************************************ 00:05:34.816 END TEST thread_poller_perf 00:05:34.816 ************************************ 00:05:34.816 00:39:27 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:34.816 00:05:34.816 real 0m2.840s 00:05:34.816 user 0m2.478s 00:05:34.816 sys 0m0.339s 00:05:34.816 00:39:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.816 00:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:34.816 ************************************ 00:05:34.816 END TEST thread 00:05:34.816 ************************************ 00:05:34.816 00:39:27 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:34.816 00:39:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.816 00:39:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.816 00:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:34.816 ************************************ 00:05:34.816 START TEST accel 00:05:34.816 ************************************ 00:05:34.816 00:39:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:34.816 * Looking for test storage... 00:05:34.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:34.816 00:39:27 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:34.816 00:39:27 -- accel/accel.sh@82 -- # get_expected_opcs 00:05:34.816 00:39:27 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:34.816 00:39:27 -- accel/accel.sh@62 -- # spdk_tgt_pid=1521664 00:05:34.816 00:39:27 -- accel/accel.sh@63 -- # waitforlisten 1521664 00:05:34.816 00:39:27 -- common/autotest_common.sh@817 -- # '[' -z 1521664 ']' 00:05:34.816 00:39:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.816 00:39:27 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:34.816 00:39:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:34.816 00:39:27 -- accel/accel.sh@61 -- # build_accel_config 00:05:34.816 00:39:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.816 00:39:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.816 00:39:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:34.816 00:39:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.816 00:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:34.816 00:39:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.816 00:39:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.816 00:39:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.816 00:39:27 -- accel/accel.sh@40 -- # local IFS=, 00:05:34.816 00:39:27 -- accel/accel.sh@41 -- # jq -r . 
00:05:34.816 [2024-04-27 00:39:27.411863] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:34.816 [2024-04-27 00:39:27.411909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521664 ] 00:05:34.816 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.816 [2024-04-27 00:39:27.465241] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.074 [2024-04-27 00:39:27.539565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.641 00:39:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:35.641 00:39:28 -- common/autotest_common.sh@850 -- # return 0 00:05:35.641 00:39:28 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:35.641 00:39:28 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:35.641 00:39:28 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:35.641 00:39:28 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:35.641 00:39:28 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:35.641 00:39:28 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:35.641 00:39:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:35.641 00:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:35.641 00:39:28 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:05:35.641 00:39:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # IFS== 00:05:35.641 00:39:28 -- accel/accel.sh@72 -- # read -r opc module 00:05:35.641 00:39:28 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:35.641 00:39:28 -- accel/accel.sh@75 -- # killprocess 1521664 00:05:35.642 00:39:28 -- common/autotest_common.sh@936 -- # '[' -z 1521664 ']' 00:05:35.642 00:39:28 -- common/autotest_common.sh@940 -- # kill -0 1521664 00:05:35.642 00:39:28 -- common/autotest_common.sh@941 -- # uname 00:05:35.642 00:39:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:35.642 00:39:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1521664 00:05:35.642 00:39:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:35.642 00:39:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:35.642 00:39:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1521664' 00:05:35.642 killing process with pid 1521664 00:05:35.642 00:39:28 -- common/autotest_common.sh@955 -- # kill 1521664 00:05:35.642 00:39:28 -- common/autotest_common.sh@960 -- # wait 1521664 00:05:36.209 00:39:28 -- accel/accel.sh@76 -- # trap - ERR 00:05:36.209 00:39:28 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:36.209 00:39:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:05:36.209 00:39:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.209 00:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:36.209 00:39:28 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:05:36.209 00:39:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:36.209 00:39:28 -- accel/accel.sh@12 -- # 
build_accel_config 00:05:36.209 00:39:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.209 00:39:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.209 00:39:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.209 00:39:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.209 00:39:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.209 00:39:28 -- accel/accel.sh@40 -- # local IFS=, 00:05:36.209 00:39:28 -- accel/accel.sh@41 -- # jq -r . 00:05:36.209 00:39:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.209 00:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:36.209 00:39:28 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:36.209 00:39:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:36.209 00:39:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.209 00:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:36.501 ************************************ 00:05:36.501 START TEST accel_missing_filename 00:05:36.501 ************************************ 00:05:36.501 00:39:28 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:05:36.501 00:39:28 -- common/autotest_common.sh@638 -- # local es=0 00:05:36.501 00:39:28 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:36.501 00:39:28 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:36.501 00:39:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:36.501 00:39:28 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:36.501 00:39:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:36.501 00:39:28 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:05:36.501 00:39:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:36.501 00:39:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.501 00:39:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.501 00:39:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.501 00:39:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.501 00:39:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.501 00:39:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.501 00:39:28 -- accel/accel.sh@40 -- # local IFS=, 00:05:36.501 00:39:28 -- accel/accel.sh@41 -- # jq -r . 00:05:36.501 [2024-04-27 00:39:28.976766] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:36.501 [2024-04-27 00:39:28.976812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521947 ] 00:05:36.501 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.501 [2024-04-27 00:39:29.031466] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.501 [2024-04-27 00:39:29.108092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.501 [2024-04-27 00:39:29.149382] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:36.801 [2024-04-27 00:39:29.210286] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:36.801 A filename is required. 
00:05:36.801 00:39:29 -- common/autotest_common.sh@641 -- # es=234 00:05:36.801 00:39:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:36.801 00:39:29 -- common/autotest_common.sh@650 -- # es=106 00:05:36.801 00:39:29 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:36.801 00:39:29 -- common/autotest_common.sh@658 -- # es=1 00:05:36.801 00:39:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:36.801 00:05:36.801 real 0m0.353s 00:05:36.801 user 0m0.285s 00:05:36.801 sys 0m0.104s 00:05:36.801 00:39:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.801 00:39:29 -- common/autotest_common.sh@10 -- # set +x 00:05:36.801 ************************************ 00:05:36.801 END TEST accel_missing_filename 00:05:36.801 ************************************ 00:05:36.801 00:39:29 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:36.801 00:39:29 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:36.801 00:39:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.802 00:39:29 -- common/autotest_common.sh@10 -- # set +x 00:05:36.802 ************************************ 00:05:36.802 START TEST accel_compress_verify 00:05:36.802 ************************************ 00:05:36.802 00:39:29 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:36.802 00:39:29 -- common/autotest_common.sh@638 -- # local es=0 00:05:36.802 00:39:29 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:36.802 00:39:29 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:36.802 00:39:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:36.802 00:39:29 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:36.802 00:39:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:36.802 00:39:29 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:36.802 00:39:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.802 00:39:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:36.802 00:39:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.802 00:39:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.802 00:39:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.802 00:39:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.802 00:39:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.802 00:39:29 -- accel/accel.sh@40 -- # local IFS=, 00:05:36.802 00:39:29 -- accel/accel.sh@41 -- # jq -r . 00:05:36.802 [2024-04-27 00:39:29.458399] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:36.802 [2024-04-27 00:39:29.458464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522198 ] 00:05:37.061 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.061 [2024-04-27 00:39:29.515580] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.061 [2024-04-27 00:39:29.586745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.061 [2024-04-27 00:39:29.627401] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:37.061 [2024-04-27 00:39:29.686880] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:05:37.320 00:05:37.320 Compression does not support the verify option, aborting. 00:05:37.320 00:39:29 -- common/autotest_common.sh@641 -- # es=161 00:05:37.320 00:39:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:37.320 00:39:29 -- common/autotest_common.sh@650 -- # es=33 00:05:37.320 00:39:29 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:37.320 00:39:29 -- common/autotest_common.sh@658 -- # es=1 00:05:37.320 00:39:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:37.320 00:05:37.320 real 0m0.351s 00:05:37.320 user 0m0.266s 00:05:37.320 sys 0m0.111s 00:05:37.320 00:39:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:37.320 00:39:29 -- common/autotest_common.sh@10 -- # set +x 00:05:37.320 ************************************ 00:05:37.320 END TEST accel_compress_verify 00:05:37.320 ************************************ 00:05:37.320 00:39:29 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:37.320 00:39:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:37.320 00:39:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.320 00:39:29 -- common/autotest_common.sh@10 -- # set +x 00:05:37.320 ************************************ 00:05:37.320 START TEST accel_wrong_workload 00:05:37.320 ************************************ 00:05:37.320 00:39:29 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:05:37.320 00:39:29 -- common/autotest_common.sh@638 -- # local es=0 00:05:37.320 00:39:29 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:37.320 00:39:29 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:37.320 00:39:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:37.320 00:39:29 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:37.320 00:39:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:37.320 00:39:29 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:05:37.320 00:39:29 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.320 00:39:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:37.320 00:39:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.320 00:39:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.320 00:39:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.320 00:39:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.320 00:39:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.320 00:39:29 -- accel/accel.sh@40 -- # local IFS=, 00:05:37.320 00:39:29 -- accel/accel.sh@41 -- # jq -r . 
00:05:37.320 Unsupported workload type: foobar 00:05:37.320 [2024-04-27 00:39:29.954789] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:37.320 accel_perf options: 00:05:37.320 [-h help message] 00:05:37.320 [-q queue depth per core] 00:05:37.320 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:37.320 [-T number of threads per core 00:05:37.320 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:37.320 [-t time in seconds] 00:05:37.320 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:37.320 [ dif_verify, , dif_generate, dif_generate_copy 00:05:37.320 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:37.320 [-l for compress/decompress workloads, name of uncompressed input file 00:05:37.320 [-S for crc32c workload, use this seed value (default 0) 00:05:37.320 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:37.320 [-f for fill workload, use this BYTE value (default 255) 00:05:37.320 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:37.320 [-y verify result if this switch is on] 00:05:37.320 [-a tasks to allocate per core (default: same value as -q)] 00:05:37.320 Can be used to spread operations across a wider range of memory. 00:05:37.320 00:39:29 -- common/autotest_common.sh@641 -- # es=1 00:05:37.320 00:39:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:37.320 00:39:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:37.320 00:39:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:37.320 00:05:37.320 real 0m0.022s 00:05:37.320 user 0m0.013s 00:05:37.320 sys 0m0.009s 00:05:37.320 00:39:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:37.320 00:39:29 -- common/autotest_common.sh@10 -- # set +x 00:05:37.320 ************************************ 00:05:37.320 END TEST accel_wrong_workload 00:05:37.320 ************************************ 00:05:37.320 Error: writing output failed: Broken pipe 00:05:37.320 00:39:29 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:37.320 00:39:29 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:05:37.320 00:39:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.320 00:39:29 -- common/autotest_common.sh@10 -- # set +x 00:05:37.579 ************************************ 00:05:37.579 START TEST accel_negative_buffers 00:05:37.579 ************************************ 00:05:37.579 00:39:30 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:37.579 00:39:30 -- common/autotest_common.sh@638 -- # local es=0 00:05:37.579 00:39:30 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:37.579 00:39:30 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:05:37.579 00:39:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:37.579 00:39:30 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:05:37.579 00:39:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:37.579 00:39:30 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:05:37.579 00:39:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:05:37.579 00:39:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.579 00:39:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.579 00:39:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.579 00:39:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.579 00:39:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.579 00:39:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.579 00:39:30 -- accel/accel.sh@40 -- # local IFS=, 00:05:37.579 00:39:30 -- accel/accel.sh@41 -- # jq -r . 00:05:37.579 -x option must be non-negative. 00:05:37.579 [2024-04-27 00:39:30.139269] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:37.579 accel_perf options: 00:05:37.579 [-h help message] 00:05:37.579 [-q queue depth per core] 00:05:37.579 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:37.579 [-T number of threads per core 00:05:37.579 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:37.579 [-t time in seconds] 00:05:37.579 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:37.579 [ dif_verify, , dif_generate, dif_generate_copy 00:05:37.579 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:37.579 [-l for compress/decompress workloads, name of uncompressed input file 00:05:37.579 [-S for crc32c workload, use this seed value (default 0) 00:05:37.579 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:37.579 [-f for fill workload, use this BYTE value (default 255) 00:05:37.579 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:37.579 [-y verify result if this switch is on] 00:05:37.579 [-a tasks to allocate per core (default: same value as -q)] 00:05:37.579 Can be used to spread operations across a wider range of memory. 
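The option listing above is what accel_perf prints when argument parsing fails; here the parse error is the negative source-buffer count (-x -1), and the usage text itself states that xor needs at least two source buffers. A short sketch of invocations composed only from the options listed above (the concrete values are assumptions for illustration, not taken from this run):

    # a valid xor run per the usage text: at least two source buffers, result verification on
    ./build/examples/accel_perf -t 1 -w xor -y -x 2
    # a crc32c run exercising the queue-depth, transfer-size and seed options
    ./build/examples/accel_perf -q 64 -o 4096 -t 5 -w crc32c -S 32 -y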
00:05:37.579 00:39:30 -- common/autotest_common.sh@641 -- # es=1 00:05:37.579 00:39:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:37.579 00:39:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:37.579 00:39:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:37.579 00:05:37.579 real 0m0.031s 00:05:37.579 user 0m0.022s 00:05:37.579 sys 0m0.009s 00:05:37.579 00:39:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:37.579 00:39:30 -- common/autotest_common.sh@10 -- # set +x 00:05:37.579 ************************************ 00:05:37.579 END TEST accel_negative_buffers 00:05:37.579 ************************************ 00:05:37.579 Error: writing output failed: Broken pipe 00:05:37.579 00:39:30 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:37.579 00:39:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:37.579 00:39:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.579 00:39:30 -- common/autotest_common.sh@10 -- # set +x 00:05:37.837 ************************************ 00:05:37.837 START TEST accel_crc32c 00:05:37.837 ************************************ 00:05:37.837 00:39:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:37.837 00:39:30 -- accel/accel.sh@16 -- # local accel_opc 00:05:37.837 00:39:30 -- accel/accel.sh@17 -- # local accel_module 00:05:37.837 00:39:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.837 00:39:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:37.837 00:39:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.837 00:39:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.837 00:39:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.837 00:39:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.837 00:39:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.837 00:39:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.837 00:39:30 -- accel/accel.sh@40 -- # local IFS=, 00:05:37.837 00:39:30 -- accel/accel.sh@41 -- # jq -r . 00:05:37.837 [2024-04-27 00:39:30.315240] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:37.837 [2024-04-27 00:39:30.315279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522290 ] 00:05:37.837 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.837 [2024-04-27 00:39:30.364964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.837 [2024-04-27 00:39:30.439101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.837 00:39:30 -- accel/accel.sh@20 -- # val= 00:05:37.837 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.837 00:39:30 -- accel/accel.sh@20 -- # val= 00:05:37.837 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.837 00:39:30 -- accel/accel.sh@20 -- # val=0x1 00:05:37.837 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.837 00:39:30 -- accel/accel.sh@20 -- # val= 00:05:37.837 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.837 00:39:30 -- accel/accel.sh@20 -- # val= 00:05:37.837 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.837 00:39:30 -- accel/accel.sh@20 -- # val=crc32c 00:05:37.837 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.837 00:39:30 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.837 00:39:30 -- accel/accel.sh@20 -- # val=32 00:05:37.837 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.837 00:39:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.837 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.837 00:39:30 -- accel/accel.sh@20 -- # val= 00:05:37.837 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.837 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.837 00:39:30 -- accel/accel.sh@20 -- # val=software 00:05:37.838 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.838 00:39:30 -- accel/accel.sh@22 -- # accel_module=software 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.838 00:39:30 -- accel/accel.sh@20 -- # val=32 00:05:37.838 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.838 00:39:30 -- accel/accel.sh@20 -- # val=32 00:05:37.838 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.838 00:39:30 -- 
accel/accel.sh@20 -- # val=1 00:05:37.838 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.838 00:39:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.838 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.838 00:39:30 -- accel/accel.sh@20 -- # val=Yes 00:05:37.838 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.838 00:39:30 -- accel/accel.sh@20 -- # val= 00:05:37.838 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:37.838 00:39:30 -- accel/accel.sh@20 -- # val= 00:05:37.838 00:39:30 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # IFS=: 00:05:37.838 00:39:30 -- accel/accel.sh@19 -- # read -r var val 00:05:39.210 00:39:31 -- accel/accel.sh@20 -- # val= 00:05:39.210 00:39:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.210 00:39:31 -- accel/accel.sh@19 -- # IFS=: 00:05:39.210 00:39:31 -- accel/accel.sh@19 -- # read -r var val 00:05:39.210 00:39:31 -- accel/accel.sh@20 -- # val= 00:05:39.210 00:39:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.210 00:39:31 -- accel/accel.sh@19 -- # IFS=: 00:05:39.210 00:39:31 -- accel/accel.sh@19 -- # read -r var val 00:05:39.210 00:39:31 -- accel/accel.sh@20 -- # val= 00:05:39.210 00:39:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.210 00:39:31 -- accel/accel.sh@19 -- # IFS=: 00:05:39.210 00:39:31 -- accel/accel.sh@19 -- # read -r var val 00:05:39.210 00:39:31 -- accel/accel.sh@20 -- # val= 00:05:39.210 00:39:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.210 00:39:31 -- accel/accel.sh@19 -- # IFS=: 00:05:39.210 00:39:31 -- accel/accel.sh@19 -- # read -r var val 00:05:39.210 00:39:31 -- accel/accel.sh@20 -- # val= 00:05:39.210 00:39:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.210 00:39:31 -- accel/accel.sh@19 -- # IFS=: 00:05:39.211 00:39:31 -- accel/accel.sh@19 -- # read -r var val 00:05:39.211 00:39:31 -- accel/accel.sh@20 -- # val= 00:05:39.211 00:39:31 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.211 00:39:31 -- accel/accel.sh@19 -- # IFS=: 00:05:39.211 00:39:31 -- accel/accel.sh@19 -- # read -r var val 00:05:39.211 00:39:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.211 00:39:31 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:39.211 00:39:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.211 00:05:39.211 real 0m1.340s 00:05:39.211 user 0m1.252s 00:05:39.211 sys 0m0.101s 00:05:39.211 00:39:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.211 00:39:31 -- common/autotest_common.sh@10 -- # set +x 00:05:39.211 ************************************ 00:05:39.211 END TEST accel_crc32c 00:05:39.211 ************************************ 00:05:39.211 00:39:31 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:39.211 00:39:31 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:39.211 00:39:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.211 00:39:31 -- common/autotest_common.sh@10 -- # set +x 00:05:39.211 ************************************ 00:05:39.211 START TEST 
accel_crc32c_C2 00:05:39.211 ************************************ 00:05:39.211 00:39:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:39.211 00:39:31 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.211 00:39:31 -- accel/accel.sh@17 -- # local accel_module 00:05:39.211 00:39:31 -- accel/accel.sh@19 -- # IFS=: 00:05:39.211 00:39:31 -- accel/accel.sh@19 -- # read -r var val 00:05:39.211 00:39:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:39.211 00:39:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:39.211 00:39:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.211 00:39:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.211 00:39:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.211 00:39:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.211 00:39:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.211 00:39:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.211 00:39:31 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.211 00:39:31 -- accel/accel.sh@41 -- # jq -r . 00:05:39.211 [2024-04-27 00:39:31.826970] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:39.211 [2024-04-27 00:39:31.827039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522548 ] 00:05:39.211 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.211 [2024-04-27 00:39:31.885740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.469 [2024-04-27 00:39:31.965415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val= 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val= 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val=0x1 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val= 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val= 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val=crc32c 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val=0 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val= 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val=software 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@22 -- # accel_module=software 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val=32 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val=32 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val=1 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val=Yes 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val= 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:39.469 00:39:32 -- accel/accel.sh@20 -- # val= 00:05:39.469 00:39:32 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # IFS=: 00:05:39.469 00:39:32 -- accel/accel.sh@19 -- # read -r var val 00:05:40.841 00:39:33 -- accel/accel.sh@20 -- # val= 00:05:40.841 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:40.841 00:39:33 -- accel/accel.sh@20 -- # val= 00:05:40.841 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:40.841 00:39:33 -- accel/accel.sh@20 -- # val= 00:05:40.841 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:40.841 00:39:33 -- accel/accel.sh@20 -- # val= 00:05:40.841 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:40.841 00:39:33 -- accel/accel.sh@20 -- # val= 00:05:40.841 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.841 00:39:33 -- 
accel/accel.sh@19 -- # read -r var val 00:05:40.841 00:39:33 -- accel/accel.sh@20 -- # val= 00:05:40.841 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:40.841 00:39:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.841 00:39:33 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:40.841 00:39:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.841 00:05:40.841 real 0m1.367s 00:05:40.841 user 0m1.256s 00:05:40.841 sys 0m0.124s 00:05:40.841 00:39:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.841 00:39:33 -- common/autotest_common.sh@10 -- # set +x 00:05:40.841 ************************************ 00:05:40.841 END TEST accel_crc32c_C2 00:05:40.841 ************************************ 00:05:40.841 00:39:33 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:40.841 00:39:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:40.841 00:39:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.841 00:39:33 -- common/autotest_common.sh@10 -- # set +x 00:05:40.841 ************************************ 00:05:40.841 START TEST accel_copy 00:05:40.841 ************************************ 00:05:40.841 00:39:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:05:40.841 00:39:33 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.841 00:39:33 -- accel/accel.sh@17 -- # local accel_module 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:40.841 00:39:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:40.841 00:39:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:40.841 00:39:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.841 00:39:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.841 00:39:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.841 00:39:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.841 00:39:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.841 00:39:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.841 00:39:33 -- accel/accel.sh@40 -- # local IFS=, 00:05:40.841 00:39:33 -- accel/accel.sh@41 -- # jq -r . 00:05:40.841 [2024-04-27 00:39:33.354009] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:40.841 [2024-04-27 00:39:33.354081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522872 ] 00:05:40.841 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.841 [2024-04-27 00:39:33.411308] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.841 [2024-04-27 00:39:33.487583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.841 00:39:33 -- accel/accel.sh@20 -- # val= 00:05:40.841 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:40.841 00:39:33 -- accel/accel.sh@20 -- # val= 00:05:40.841 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.841 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:40.841 00:39:33 -- accel/accel.sh@20 -- # val=0x1 00:05:40.842 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.842 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.842 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:40.842 00:39:33 -- accel/accel.sh@20 -- # val= 00:05:40.842 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.842 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.842 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:40.842 00:39:33 -- accel/accel.sh@20 -- # val= 00:05:40.842 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.842 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.842 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:40.842 00:39:33 -- accel/accel.sh@20 -- # val=copy 00:05:40.842 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.842 00:39:33 -- accel/accel.sh@23 -- # accel_opc=copy 00:05:40.842 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.842 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:40.842 00:39:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.842 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.842 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.842 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:40.842 00:39:33 -- accel/accel.sh@20 -- # val= 00:05:40.842 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.842 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:40.842 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:41.099 00:39:33 -- accel/accel.sh@20 -- # val=software 00:05:41.099 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.099 00:39:33 -- accel/accel.sh@22 -- # accel_module=software 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:41.099 00:39:33 -- accel/accel.sh@20 -- # val=32 00:05:41.099 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:41.099 00:39:33 -- accel/accel.sh@20 -- # val=32 00:05:41.099 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:41.099 00:39:33 -- accel/accel.sh@20 -- # val=1 00:05:41.099 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:41.099 00:39:33 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:41.099 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:41.099 00:39:33 -- accel/accel.sh@20 -- # val=Yes 00:05:41.099 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:41.099 00:39:33 -- accel/accel.sh@20 -- # val= 00:05:41.099 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:41.099 00:39:33 -- accel/accel.sh@20 -- # val= 00:05:41.099 00:39:33 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # IFS=: 00:05:41.099 00:39:33 -- accel/accel.sh@19 -- # read -r var val 00:05:42.033 00:39:34 -- accel/accel.sh@20 -- # val= 00:05:42.033 00:39:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.033 00:39:34 -- accel/accel.sh@19 -- # IFS=: 00:05:42.033 00:39:34 -- accel/accel.sh@19 -- # read -r var val 00:05:42.033 00:39:34 -- accel/accel.sh@20 -- # val= 00:05:42.033 00:39:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.033 00:39:34 -- accel/accel.sh@19 -- # IFS=: 00:05:42.033 00:39:34 -- accel/accel.sh@19 -- # read -r var val 00:05:42.033 00:39:34 -- accel/accel.sh@20 -- # val= 00:05:42.033 00:39:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.033 00:39:34 -- accel/accel.sh@19 -- # IFS=: 00:05:42.033 00:39:34 -- accel/accel.sh@19 -- # read -r var val 00:05:42.033 00:39:34 -- accel/accel.sh@20 -- # val= 00:05:42.033 00:39:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.033 00:39:34 -- accel/accel.sh@19 -- # IFS=: 00:05:42.033 00:39:34 -- accel/accel.sh@19 -- # read -r var val 00:05:42.033 00:39:34 -- accel/accel.sh@20 -- # val= 00:05:42.033 00:39:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.033 00:39:34 -- accel/accel.sh@19 -- # IFS=: 00:05:42.033 00:39:34 -- accel/accel.sh@19 -- # read -r var val 00:05:42.033 00:39:34 -- accel/accel.sh@20 -- # val= 00:05:42.033 00:39:34 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.033 00:39:34 -- accel/accel.sh@19 -- # IFS=: 00:05:42.033 00:39:34 -- accel/accel.sh@19 -- # read -r var val 00:05:42.033 00:39:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.033 00:39:34 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:42.033 00:39:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.033 00:05:42.033 real 0m1.361s 00:05:42.033 user 0m1.257s 00:05:42.033 sys 0m0.115s 00:05:42.033 00:39:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.033 00:39:34 -- common/autotest_common.sh@10 -- # set +x 00:05:42.033 ************************************ 00:05:42.033 END TEST accel_copy 00:05:42.033 ************************************ 00:05:42.033 00:39:34 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.033 00:39:34 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:05:42.033 00:39:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.033 00:39:34 -- common/autotest_common.sh@10 -- # set +x 00:05:42.291 ************************************ 00:05:42.291 START TEST accel_fill 00:05:42.291 ************************************ 00:05:42.291 00:39:34 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.291 00:39:34 -- accel/accel.sh@16 -- # local accel_opc 
00:05:42.291 00:39:34 -- accel/accel.sh@17 -- # local accel_module 00:05:42.291 00:39:34 -- accel/accel.sh@19 -- # IFS=: 00:05:42.291 00:39:34 -- accel/accel.sh@19 -- # read -r var val 00:05:42.291 00:39:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.291 00:39:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.291 00:39:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.291 00:39:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.291 00:39:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.291 00:39:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.291 00:39:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.291 00:39:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.291 00:39:34 -- accel/accel.sh@40 -- # local IFS=, 00:05:42.291 00:39:34 -- accel/accel.sh@41 -- # jq -r . 00:05:42.291 [2024-04-27 00:39:34.873846] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:42.291 [2024-04-27 00:39:34.873914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523205 ] 00:05:42.291 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.291 [2024-04-27 00:39:34.929863] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.549 [2024-04-27 00:39:35.004117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.549 00:39:35 -- accel/accel.sh@20 -- # val= 00:05:42.549 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.549 00:39:35 -- accel/accel.sh@20 -- # val= 00:05:42.549 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.549 00:39:35 -- accel/accel.sh@20 -- # val=0x1 00:05:42.549 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.549 00:39:35 -- accel/accel.sh@20 -- # val= 00:05:42.549 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.549 00:39:35 -- accel/accel.sh@20 -- # val= 00:05:42.549 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.549 00:39:35 -- accel/accel.sh@20 -- # val=fill 00:05:42.549 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.549 00:39:35 -- accel/accel.sh@23 -- # accel_opc=fill 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.549 00:39:35 -- accel/accel.sh@20 -- # val=0x80 00:05:42.549 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.549 00:39:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.549 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.549 00:39:35 -- accel/accel.sh@19 
-- # read -r var val 00:05:42.549 00:39:35 -- accel/accel.sh@20 -- # val= 00:05:42.549 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.549 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.549 00:39:35 -- accel/accel.sh@20 -- # val=software 00:05:42.549 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.550 00:39:35 -- accel/accel.sh@22 -- # accel_module=software 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.550 00:39:35 -- accel/accel.sh@20 -- # val=64 00:05:42.550 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.550 00:39:35 -- accel/accel.sh@20 -- # val=64 00:05:42.550 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.550 00:39:35 -- accel/accel.sh@20 -- # val=1 00:05:42.550 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.550 00:39:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.550 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.550 00:39:35 -- accel/accel.sh@20 -- # val=Yes 00:05:42.550 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.550 00:39:35 -- accel/accel.sh@20 -- # val= 00:05:42.550 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:42.550 00:39:35 -- accel/accel.sh@20 -- # val= 00:05:42.550 00:39:35 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # IFS=: 00:05:42.550 00:39:35 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val= 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val= 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val= 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val= 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val= 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val= 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 
-- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.923 00:39:36 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:43.923 00:39:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.923 00:05:43.923 real 0m1.359s 00:05:43.923 user 0m1.250s 00:05:43.923 sys 0m0.121s 00:05:43.923 00:39:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.923 00:39:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.923 ************************************ 00:05:43.923 END TEST accel_fill 00:05:43.923 ************************************ 00:05:43.923 00:39:36 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:43.923 00:39:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:43.923 00:39:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.923 00:39:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.923 ************************************ 00:05:43.923 START TEST accel_copy_crc32c 00:05:43.923 ************************************ 00:05:43.923 00:39:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:05:43.923 00:39:36 -- accel/accel.sh@16 -- # local accel_opc 00:05:43.923 00:39:36 -- accel/accel.sh@17 -- # local accel_module 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:43.923 00:39:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:43.923 00:39:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.923 00:39:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.923 00:39:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.923 00:39:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.923 00:39:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.923 00:39:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.923 00:39:36 -- accel/accel.sh@40 -- # local IFS=, 00:05:43.923 00:39:36 -- accel/accel.sh@41 -- # jq -r . 00:05:43.923 [2024-04-27 00:39:36.384628] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:43.923 [2024-04-27 00:39:36.384681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523532 ] 00:05:43.923 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.923 [2024-04-27 00:39:36.439962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.923 [2024-04-27 00:39:36.510854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val= 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val= 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val=0x1 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val= 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val= 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val=0 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val= 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val=software 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@22 -- # accel_module=software 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val=32 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 
00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val=32 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val=1 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val=Yes 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val= 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:43.923 00:39:36 -- accel/accel.sh@20 -- # val= 00:05:43.923 00:39:36 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # IFS=: 00:05:43.923 00:39:36 -- accel/accel.sh@19 -- # read -r var val 00:05:45.298 00:39:37 -- accel/accel.sh@20 -- # val= 00:05:45.298 00:39:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # IFS=: 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # read -r var val 00:05:45.298 00:39:37 -- accel/accel.sh@20 -- # val= 00:05:45.298 00:39:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # IFS=: 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # read -r var val 00:05:45.298 00:39:37 -- accel/accel.sh@20 -- # val= 00:05:45.298 00:39:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # IFS=: 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # read -r var val 00:05:45.298 00:39:37 -- accel/accel.sh@20 -- # val= 00:05:45.298 00:39:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # IFS=: 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # read -r var val 00:05:45.298 00:39:37 -- accel/accel.sh@20 -- # val= 00:05:45.298 00:39:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # IFS=: 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # read -r var val 00:05:45.298 00:39:37 -- accel/accel.sh@20 -- # val= 00:05:45.298 00:39:37 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # IFS=: 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # read -r var val 00:05:45.298 00:39:37 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.298 00:39:37 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:45.298 00:39:37 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.298 00:05:45.298 real 0m1.354s 00:05:45.298 user 0m1.256s 00:05:45.298 sys 0m0.111s 00:05:45.298 00:39:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.298 00:39:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.298 ************************************ 00:05:45.298 END TEST accel_copy_crc32c 00:05:45.298 ************************************ 00:05:45.298 00:39:37 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:45.298 
00:39:37 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:45.298 00:39:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.298 00:39:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.298 ************************************ 00:05:45.298 START TEST accel_copy_crc32c_C2 00:05:45.298 ************************************ 00:05:45.298 00:39:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:45.298 00:39:37 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.298 00:39:37 -- accel/accel.sh@17 -- # local accel_module 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # IFS=: 00:05:45.298 00:39:37 -- accel/accel.sh@19 -- # read -r var val 00:05:45.298 00:39:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:45.298 00:39:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:45.298 00:39:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.298 00:39:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.298 00:39:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.298 00:39:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.298 00:39:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.298 00:39:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.298 00:39:37 -- accel/accel.sh@40 -- # local IFS=, 00:05:45.298 00:39:37 -- accel/accel.sh@41 -- # jq -r . 00:05:45.298 [2024-04-27 00:39:37.906037] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:45.298 [2024-04-27 00:39:37.906124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523790 ] 00:05:45.298 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.298 [2024-04-27 00:39:37.963683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.556 [2024-04-27 00:39:38.040991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val= 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val= 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val=0x1 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val= 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val= 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 
00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val=0 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val= 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val=software 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@22 -- # accel_module=software 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val=32 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val=32 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val=1 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val=Yes 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val= 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:45.556 00:39:38 -- accel/accel.sh@20 -- # val= 00:05:45.556 00:39:38 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # IFS=: 00:05:45.556 00:39:38 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val= 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val= 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val= 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case 
"$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val= 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val= 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val= 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.930 00:39:39 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:46.930 00:39:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.930 00:05:46.930 real 0m1.365s 00:05:46.930 user 0m1.255s 00:05:46.930 sys 0m0.123s 00:05:46.930 00:39:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:46.930 00:39:39 -- common/autotest_common.sh@10 -- # set +x 00:05:46.930 ************************************ 00:05:46.930 END TEST accel_copy_crc32c_C2 00:05:46.930 ************************************ 00:05:46.930 00:39:39 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:46.930 00:39:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:46.930 00:39:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.930 00:39:39 -- common/autotest_common.sh@10 -- # set +x 00:05:46.930 ************************************ 00:05:46.930 START TEST accel_dualcast 00:05:46.930 ************************************ 00:05:46.930 00:39:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:05:46.930 00:39:39 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.930 00:39:39 -- accel/accel.sh@17 -- # local accel_module 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:46.930 00:39:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.930 00:39:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:46.930 00:39:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.930 00:39:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.930 00:39:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.930 00:39:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.930 00:39:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.930 00:39:39 -- accel/accel.sh@40 -- # local IFS=, 00:05:46.930 00:39:39 -- accel/accel.sh@41 -- # jq -r . 00:05:46.930 [2024-04-27 00:39:39.418812] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:46.930 [2024-04-27 00:39:39.418874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524048 ] 00:05:46.930 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.930 [2024-04-27 00:39:39.476721] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.930 [2024-04-27 00:39:39.548810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val= 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val= 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val=0x1 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val= 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val= 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val=dualcast 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val= 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val=software 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@22 -- # accel_module=software 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val=32 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val=32 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.930 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.930 00:39:39 -- accel/accel.sh@20 -- # val=1 00:05:46.930 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.931 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.931 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.931 00:39:39 
-- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.931 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.931 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.931 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.931 00:39:39 -- accel/accel.sh@20 -- # val=Yes 00:05:46.931 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.931 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.931 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.931 00:39:39 -- accel/accel.sh@20 -- # val= 00:05:46.931 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.931 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.931 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:46.931 00:39:39 -- accel/accel.sh@20 -- # val= 00:05:46.931 00:39:39 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.931 00:39:39 -- accel/accel.sh@19 -- # IFS=: 00:05:46.931 00:39:39 -- accel/accel.sh@19 -- # read -r var val 00:05:48.301 00:39:40 -- accel/accel.sh@20 -- # val= 00:05:48.301 00:39:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # IFS=: 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # read -r var val 00:05:48.301 00:39:40 -- accel/accel.sh@20 -- # val= 00:05:48.301 00:39:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # IFS=: 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # read -r var val 00:05:48.301 00:39:40 -- accel/accel.sh@20 -- # val= 00:05:48.301 00:39:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # IFS=: 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # read -r var val 00:05:48.301 00:39:40 -- accel/accel.sh@20 -- # val= 00:05:48.301 00:39:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # IFS=: 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # read -r var val 00:05:48.301 00:39:40 -- accel/accel.sh@20 -- # val= 00:05:48.301 00:39:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # IFS=: 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # read -r var val 00:05:48.301 00:39:40 -- accel/accel.sh@20 -- # val= 00:05:48.301 00:39:40 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # IFS=: 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # read -r var val 00:05:48.301 00:39:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.301 00:39:40 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:48.301 00:39:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.301 00:05:48.301 real 0m1.360s 00:05:48.301 user 0m1.267s 00:05:48.301 sys 0m0.106s 00:05:48.301 00:39:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:48.301 00:39:40 -- common/autotest_common.sh@10 -- # set +x 00:05:48.301 ************************************ 00:05:48.301 END TEST accel_dualcast 00:05:48.301 ************************************ 00:05:48.301 00:39:40 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:48.301 00:39:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:48.301 00:39:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.301 00:39:40 -- common/autotest_common.sh@10 -- # set +x 00:05:48.301 ************************************ 00:05:48.301 START TEST accel_compare 00:05:48.301 ************************************ 00:05:48.301 00:39:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:05:48.301 00:39:40 -- accel/accel.sh@16 -- # local accel_opc 00:05:48.301 00:39:40 
-- accel/accel.sh@17 -- # local accel_module 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # IFS=: 00:05:48.301 00:39:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:48.301 00:39:40 -- accel/accel.sh@19 -- # read -r var val 00:05:48.301 00:39:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:48.301 00:39:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.301 00:39:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.301 00:39:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.301 00:39:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.301 00:39:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.301 00:39:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.301 00:39:40 -- accel/accel.sh@40 -- # local IFS=, 00:05:48.301 00:39:40 -- accel/accel.sh@41 -- # jq -r . 00:05:48.301 [2024-04-27 00:39:40.925884] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:48.301 [2024-04-27 00:39:40.925933] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524301 ] 00:05:48.301 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.301 [2024-04-27 00:39:40.979800] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.558 [2024-04-27 00:39:41.056015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val= 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val= 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val=0x1 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val= 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val= 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val=compare 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@23 -- # accel_opc=compare 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val= 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- 
accel/accel.sh@20 -- # val=software 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@22 -- # accel_module=software 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val=32 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val=32 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val=1 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val=Yes 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val= 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:48.558 00:39:41 -- accel/accel.sh@20 -- # val= 00:05:48.558 00:39:41 -- accel/accel.sh@21 -- # case "$var" in 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # IFS=: 00:05:48.558 00:39:41 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@20 -- # val= 00:05:49.931 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@20 -- # val= 00:05:49.931 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@20 -- # val= 00:05:49.931 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@20 -- # val= 00:05:49.931 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@20 -- # val= 00:05:49.931 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@20 -- # val= 00:05:49.931 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.931 00:39:42 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:49.931 00:39:42 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:05:49.931 00:05:49.931 real 0m1.350s 00:05:49.931 user 0m1.250s 00:05:49.931 sys 0m0.112s 00:05:49.931 00:39:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.931 00:39:42 -- common/autotest_common.sh@10 -- # set +x 00:05:49.931 ************************************ 00:05:49.931 END TEST accel_compare 00:05:49.931 ************************************ 00:05:49.931 00:39:42 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:49.931 00:39:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:05:49.931 00:39:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.931 00:39:42 -- common/autotest_common.sh@10 -- # set +x 00:05:49.931 ************************************ 00:05:49.931 START TEST accel_xor 00:05:49.931 ************************************ 00:05:49.931 00:39:42 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:05:49.931 00:39:42 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.931 00:39:42 -- accel/accel.sh@17 -- # local accel_module 00:05:49.931 00:39:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:49.931 00:39:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.931 00:39:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.931 00:39:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.931 00:39:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.931 00:39:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.931 00:39:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.931 00:39:42 -- accel/accel.sh@40 -- # local IFS=, 00:05:49.931 00:39:42 -- accel/accel.sh@41 -- # jq -r . 00:05:49.931 [2024-04-27 00:39:42.436390] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:49.931 [2024-04-27 00:39:42.436434] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524566 ] 00:05:49.931 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.931 [2024-04-27 00:39:42.486044] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.931 [2024-04-27 00:39:42.561538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.931 00:39:42 -- accel/accel.sh@20 -- # val= 00:05:49.931 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@20 -- # val= 00:05:49.931 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@20 -- # val=0x1 00:05:49.931 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@20 -- # val= 00:05:49.931 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@20 -- # val= 00:05:49.931 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@20 -- # val=xor 00:05:49.931 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.931 00:39:42 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.931 00:39:42 -- accel/accel.sh@20 -- # val=2 00:05:49.931 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.931 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.932 00:39:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.932 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.932 00:39:42 -- accel/accel.sh@20 -- # val= 00:05:49.932 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.932 00:39:42 -- accel/accel.sh@20 -- # val=software 00:05:49.932 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.932 00:39:42 -- accel/accel.sh@22 -- # accel_module=software 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.932 00:39:42 -- accel/accel.sh@20 -- # val=32 00:05:49.932 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.932 00:39:42 -- accel/accel.sh@20 -- # val=32 00:05:49.932 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.932 00:39:42 -- 
accel/accel.sh@20 -- # val=1 00:05:49.932 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.932 00:39:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.932 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.932 00:39:42 -- accel/accel.sh@20 -- # val=Yes 00:05:49.932 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.932 00:39:42 -- accel/accel.sh@20 -- # val= 00:05:49.932 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:49.932 00:39:42 -- accel/accel.sh@20 -- # val= 00:05:49.932 00:39:42 -- accel/accel.sh@21 -- # case "$var" in 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # IFS=: 00:05:49.932 00:39:42 -- accel/accel.sh@19 -- # read -r var val 00:05:51.302 00:39:43 -- accel/accel.sh@20 -- # val= 00:05:51.302 00:39:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # IFS=: 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # read -r var val 00:05:51.302 00:39:43 -- accel/accel.sh@20 -- # val= 00:05:51.302 00:39:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # IFS=: 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # read -r var val 00:05:51.302 00:39:43 -- accel/accel.sh@20 -- # val= 00:05:51.302 00:39:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # IFS=: 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # read -r var val 00:05:51.302 00:39:43 -- accel/accel.sh@20 -- # val= 00:05:51.302 00:39:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # IFS=: 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # read -r var val 00:05:51.302 00:39:43 -- accel/accel.sh@20 -- # val= 00:05:51.302 00:39:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # IFS=: 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # read -r var val 00:05:51.302 00:39:43 -- accel/accel.sh@20 -- # val= 00:05:51.302 00:39:43 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # IFS=: 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # read -r var val 00:05:51.302 00:39:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.302 00:39:43 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:51.302 00:39:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.302 00:05:51.302 real 0m1.342s 00:05:51.302 user 0m1.247s 00:05:51.302 sys 0m0.109s 00:05:51.302 00:39:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:51.302 00:39:43 -- common/autotest_common.sh@10 -- # set +x 00:05:51.302 ************************************ 00:05:51.302 END TEST accel_xor 00:05:51.302 ************************************ 00:05:51.302 00:39:43 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:51.302 00:39:43 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:51.302 00:39:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:51.302 00:39:43 -- common/autotest_common.sh@10 -- # set +x 00:05:51.302 ************************************ 00:05:51.302 START TEST accel_xor 
00:05:51.302 ************************************ 00:05:51.302 00:39:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:05:51.302 00:39:43 -- accel/accel.sh@16 -- # local accel_opc 00:05:51.302 00:39:43 -- accel/accel.sh@17 -- # local accel_module 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # IFS=: 00:05:51.302 00:39:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:51.302 00:39:43 -- accel/accel.sh@19 -- # read -r var val 00:05:51.302 00:39:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.302 00:39:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:51.302 00:39:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.302 00:39:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.302 00:39:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.302 00:39:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.302 00:39:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.302 00:39:43 -- accel/accel.sh@40 -- # local IFS=, 00:05:51.302 00:39:43 -- accel/accel.sh@41 -- # jq -r . 00:05:51.302 [2024-04-27 00:39:43.934507] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:51.302 [2024-04-27 00:39:43.934552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524820 ] 00:05:51.302 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.302 [2024-04-27 00:39:43.988552] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.560 [2024-04-27 00:39:44.060337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.560 00:39:44 -- accel/accel.sh@20 -- # val= 00:05:51.560 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.560 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.560 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.560 00:39:44 -- accel/accel.sh@20 -- # val= 00:05:51.560 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.560 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.560 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.560 00:39:44 -- accel/accel.sh@20 -- # val=0x1 00:05:51.560 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.560 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.560 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.560 00:39:44 -- accel/accel.sh@20 -- # val= 00:05:51.560 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.560 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.560 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.560 00:39:44 -- accel/accel.sh@20 -- # val= 00:05:51.560 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.560 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.560 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.560 00:39:44 -- accel/accel.sh@20 -- # val=xor 00:05:51.560 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.560 00:39:44 -- accel/accel.sh@23 -- # accel_opc=xor 00:05:51.560 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.560 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.560 00:39:44 -- accel/accel.sh@20 -- # val=3 00:05:51.560 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.560 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.561 00:39:44 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:05:51.561 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.561 00:39:44 -- accel/accel.sh@20 -- # val= 00:05:51.561 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.561 00:39:44 -- accel/accel.sh@20 -- # val=software 00:05:51.561 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.561 00:39:44 -- accel/accel.sh@22 -- # accel_module=software 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.561 00:39:44 -- accel/accel.sh@20 -- # val=32 00:05:51.561 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.561 00:39:44 -- accel/accel.sh@20 -- # val=32 00:05:51.561 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.561 00:39:44 -- accel/accel.sh@20 -- # val=1 00:05:51.561 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.561 00:39:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.561 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.561 00:39:44 -- accel/accel.sh@20 -- # val=Yes 00:05:51.561 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.561 00:39:44 -- accel/accel.sh@20 -- # val= 00:05:51.561 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:51.561 00:39:44 -- accel/accel.sh@20 -- # val= 00:05:51.561 00:39:44 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # IFS=: 00:05:51.561 00:39:44 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val= 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val= 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val= 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val= 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val= 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # 
read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val= 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.932 00:39:45 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:52.932 00:39:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.932 00:05:52.932 real 0m1.350s 00:05:52.932 user 0m1.254s 00:05:52.932 sys 0m0.109s 00:05:52.932 00:39:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.932 00:39:45 -- common/autotest_common.sh@10 -- # set +x 00:05:52.932 ************************************ 00:05:52.932 END TEST accel_xor 00:05:52.932 ************************************ 00:05:52.932 00:39:45 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:52.932 00:39:45 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:52.932 00:39:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.932 00:39:45 -- common/autotest_common.sh@10 -- # set +x 00:05:52.932 ************************************ 00:05:52.932 START TEST accel_dif_verify 00:05:52.932 ************************************ 00:05:52.932 00:39:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:05:52.932 00:39:45 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.932 00:39:45 -- accel/accel.sh@17 -- # local accel_module 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:52.932 00:39:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.932 00:39:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:52.932 00:39:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.932 00:39:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.932 00:39:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.932 00:39:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.932 00:39:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.932 00:39:45 -- accel/accel.sh@40 -- # local IFS=, 00:05:52.932 00:39:45 -- accel/accel.sh@41 -- # jq -r . 00:05:52.932 [2024-04-27 00:39:45.431125] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:52.932 [2024-04-27 00:39:45.431171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525080 ] 00:05:52.932 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.932 [2024-04-27 00:39:45.485222] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.932 [2024-04-27 00:39:45.557157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val= 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val= 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val=0x1 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val= 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val= 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val=dif_verify 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val= 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.932 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.932 00:39:45 -- accel/accel.sh@20 -- # val=software 00:05:52.932 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.933 00:39:45 -- accel/accel.sh@22 -- # accel_module=software 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # read -r 
var val 00:05:52.933 00:39:45 -- accel/accel.sh@20 -- # val=32 00:05:52.933 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.933 00:39:45 -- accel/accel.sh@20 -- # val=32 00:05:52.933 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.933 00:39:45 -- accel/accel.sh@20 -- # val=1 00:05:52.933 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.933 00:39:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.933 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.933 00:39:45 -- accel/accel.sh@20 -- # val=No 00:05:52.933 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.933 00:39:45 -- accel/accel.sh@20 -- # val= 00:05:52.933 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:52.933 00:39:45 -- accel/accel.sh@20 -- # val= 00:05:52.933 00:39:45 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # IFS=: 00:05:52.933 00:39:45 -- accel/accel.sh@19 -- # read -r var val 00:05:54.305 00:39:46 -- accel/accel.sh@20 -- # val= 00:05:54.305 00:39:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # IFS=: 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # read -r var val 00:05:54.305 00:39:46 -- accel/accel.sh@20 -- # val= 00:05:54.305 00:39:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # IFS=: 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # read -r var val 00:05:54.305 00:39:46 -- accel/accel.sh@20 -- # val= 00:05:54.305 00:39:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # IFS=: 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # read -r var val 00:05:54.305 00:39:46 -- accel/accel.sh@20 -- # val= 00:05:54.305 00:39:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # IFS=: 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # read -r var val 00:05:54.305 00:39:46 -- accel/accel.sh@20 -- # val= 00:05:54.305 00:39:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # IFS=: 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # read -r var val 00:05:54.305 00:39:46 -- accel/accel.sh@20 -- # val= 00:05:54.305 00:39:46 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # IFS=: 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # read -r var val 00:05:54.305 00:39:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.305 00:39:46 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:54.305 00:39:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.305 00:05:54.305 real 0m1.352s 00:05:54.305 user 0m1.256s 00:05:54.305 sys 0m0.110s 00:05:54.305 00:39:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.305 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:54.305 
************************************ 00:05:54.305 END TEST accel_dif_verify 00:05:54.305 ************************************ 00:05:54.305 00:39:46 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:54.305 00:39:46 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:54.305 00:39:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.305 00:39:46 -- common/autotest_common.sh@10 -- # set +x 00:05:54.305 ************************************ 00:05:54.305 START TEST accel_dif_generate 00:05:54.305 ************************************ 00:05:54.305 00:39:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:05:54.305 00:39:46 -- accel/accel.sh@16 -- # local accel_opc 00:05:54.305 00:39:46 -- accel/accel.sh@17 -- # local accel_module 00:05:54.305 00:39:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # IFS=: 00:05:54.305 00:39:46 -- accel/accel.sh@19 -- # read -r var val 00:05:54.305 00:39:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:54.305 00:39:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.305 00:39:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.305 00:39:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.305 00:39:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.305 00:39:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.305 00:39:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.305 00:39:46 -- accel/accel.sh@40 -- # local IFS=, 00:05:54.305 00:39:46 -- accel/accel.sh@41 -- # jq -r . 00:05:54.305 [2024-04-27 00:39:46.922455] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:54.305 [2024-04-27 00:39:46.922502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525347 ] 00:05:54.305 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.305 [2024-04-27 00:39:46.971457] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.563 [2024-04-27 00:39:47.044951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.563 00:39:47 -- accel/accel.sh@20 -- # val= 00:05:54.563 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.563 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.563 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.563 00:39:47 -- accel/accel.sh@20 -- # val= 00:05:54.563 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.563 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.563 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.563 00:39:47 -- accel/accel.sh@20 -- # val=0x1 00:05:54.563 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.563 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.563 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.563 00:39:47 -- accel/accel.sh@20 -- # val= 00:05:54.563 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.563 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.563 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.563 00:39:47 -- accel/accel.sh@20 -- # val= 00:05:54.563 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.563 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val=dif_generate 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val='512 bytes' 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val='8 bytes' 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val= 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val=software 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@22 -- # accel_module=software 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read 
-r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val=32 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val=32 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val=1 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val=No 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val= 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:54.564 00:39:47 -- accel/accel.sh@20 -- # val= 00:05:54.564 00:39:47 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # IFS=: 00:05:54.564 00:39:47 -- accel/accel.sh@19 -- # read -r var val 00:05:55.938 00:39:48 -- accel/accel.sh@20 -- # val= 00:05:55.938 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.938 00:39:48 -- accel/accel.sh@20 -- # val= 00:05:55.938 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.938 00:39:48 -- accel/accel.sh@20 -- # val= 00:05:55.938 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.938 00:39:48 -- accel/accel.sh@20 -- # val= 00:05:55.938 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.938 00:39:48 -- accel/accel.sh@20 -- # val= 00:05:55.938 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.938 00:39:48 -- accel/accel.sh@20 -- # val= 00:05:55.938 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.938 00:39:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.938 00:39:48 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:55.938 00:39:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.938 00:05:55.938 real 0m1.340s 00:05:55.938 user 0m1.255s 00:05:55.938 sys 0m0.098s 00:05:55.938 00:39:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.938 00:39:48 -- common/autotest_common.sh@10 -- # set +x 00:05:55.938 
************************************ 00:05:55.938 END TEST accel_dif_generate 00:05:55.938 ************************************ 00:05:55.938 00:39:48 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:55.938 00:39:48 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:55.938 00:39:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.938 00:39:48 -- common/autotest_common.sh@10 -- # set +x 00:05:55.938 ************************************ 00:05:55.938 START TEST accel_dif_generate_copy 00:05:55.938 ************************************ 00:05:55.938 00:39:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:05:55.938 00:39:48 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.938 00:39:48 -- accel/accel.sh@17 -- # local accel_module 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.938 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.938 00:39:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:55.938 00:39:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.938 00:39:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:55.938 00:39:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.938 00:39:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.938 00:39:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.938 00:39:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.938 00:39:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.939 00:39:48 -- accel/accel.sh@40 -- # local IFS=, 00:05:55.939 00:39:48 -- accel/accel.sh@41 -- # jq -r . 00:05:55.939 [2024-04-27 00:39:48.429695] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:55.939 [2024-04-27 00:39:48.429762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525666 ] 00:05:55.939 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.939 [2024-04-27 00:39:48.487758] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.939 [2024-04-27 00:39:48.561491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val= 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val= 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val=0x1 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val= 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val= 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val= 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val=software 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@22 -- # accel_module=software 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val=32 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val=32 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r 
var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val=1 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val=No 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val= 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:55.939 00:39:48 -- accel/accel.sh@20 -- # val= 00:05:55.939 00:39:48 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # IFS=: 00:05:55.939 00:39:48 -- accel/accel.sh@19 -- # read -r var val 00:05:57.312 00:39:49 -- accel/accel.sh@20 -- # val= 00:05:57.312 00:39:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # IFS=: 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # read -r var val 00:05:57.312 00:39:49 -- accel/accel.sh@20 -- # val= 00:05:57.312 00:39:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # IFS=: 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # read -r var val 00:05:57.312 00:39:49 -- accel/accel.sh@20 -- # val= 00:05:57.312 00:39:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # IFS=: 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # read -r var val 00:05:57.312 00:39:49 -- accel/accel.sh@20 -- # val= 00:05:57.312 00:39:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # IFS=: 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # read -r var val 00:05:57.312 00:39:49 -- accel/accel.sh@20 -- # val= 00:05:57.312 00:39:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # IFS=: 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # read -r var val 00:05:57.312 00:39:49 -- accel/accel.sh@20 -- # val= 00:05:57.312 00:39:49 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # IFS=: 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # read -r var val 00:05:57.312 00:39:49 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.312 00:39:49 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:57.312 00:39:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.312 00:05:57.312 real 0m1.363s 00:05:57.312 user 0m1.256s 00:05:57.312 sys 0m0.120s 00:05:57.312 00:39:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.312 00:39:49 -- common/autotest_common.sh@10 -- # set +x 00:05:57.312 ************************************ 00:05:57.312 END TEST accel_dif_generate_copy 00:05:57.312 ************************************ 00:05:57.312 00:39:49 -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:57.312 00:39:49 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.312 00:39:49 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:05:57.312 00:39:49 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.312 00:39:49 -- common/autotest_common.sh@10 -- # set +x 00:05:57.312 ************************************ 00:05:57.312 START TEST accel_comp 00:05:57.312 ************************************ 00:05:57.312 00:39:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.312 00:39:49 -- accel/accel.sh@16 -- # local accel_opc 00:05:57.312 00:39:49 -- accel/accel.sh@17 -- # local accel_module 00:05:57.312 00:39:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # IFS=: 00:05:57.312 00:39:49 -- accel/accel.sh@19 -- # read -r var val 00:05:57.312 00:39:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.312 00:39:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.312 00:39:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.312 00:39:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.312 00:39:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.312 00:39:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.312 00:39:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.312 00:39:49 -- accel/accel.sh@40 -- # local IFS=, 00:05:57.312 00:39:49 -- accel/accel.sh@41 -- # jq -r . 00:05:57.312 [2024-04-27 00:39:49.944456] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:57.312 [2024-04-27 00:39:49.944517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526006 ] 00:05:57.312 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.312 [2024-04-27 00:39:49.999098] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.570 [2024-04-27 00:39:50.083387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val= 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val= 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val= 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val=0x1 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val= 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val= 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 
-- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val=compress 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@23 -- # accel_opc=compress 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val= 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val=software 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@22 -- # accel_module=software 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val=32 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val=32 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val=1 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val=No 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val= 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:57.570 00:39:50 -- accel/accel.sh@20 -- # val= 00:05:57.570 00:39:50 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # IFS=: 00:05:57.570 00:39:50 -- accel/accel.sh@19 -- # read -r var val 00:05:58.945 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:58.945 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:58.945 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:58.945 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # read 
-r var val 00:05:58.945 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:58.945 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:58.945 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:58.945 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:58.945 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:58.945 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:58.945 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:58.945 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:58.945 00:39:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.945 00:39:51 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:58.945 00:39:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.945 00:05:58.945 real 0m1.359s 00:05:58.945 user 0m1.262s 00:05:58.945 sys 0m0.111s 00:05:58.945 00:39:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.945 00:39:51 -- common/autotest_common.sh@10 -- # set +x 00:05:58.945 ************************************ 00:05:58.945 END TEST accel_comp 00:05:58.945 ************************************ 00:05:58.945 00:39:51 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:58.945 00:39:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:05:58.945 00:39:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.945 00:39:51 -- common/autotest_common.sh@10 -- # set +x 00:05:58.945 ************************************ 00:05:58.945 START TEST accel_decomp 00:05:58.945 ************************************ 00:05:58.945 00:39:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:58.945 00:39:51 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.945 00:39:51 -- accel/accel.sh@17 -- # local accel_module 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:58.945 00:39:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:58.945 00:39:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:58.945 00:39:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.945 00:39:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.945 00:39:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.945 00:39:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.945 00:39:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.945 00:39:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.945 00:39:51 -- accel/accel.sh@40 -- # local IFS=, 00:05:58.945 00:39:51 -- accel/accel.sh@41 -- # jq -r . 00:05:58.945 [2024-04-27 00:39:51.467870] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
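accel_comp and accel_decomp drive the same accel_perf binary with the compress and decompress opcodes against the bundled test/accel/bib sample; the decompress run adds -y, which shows up as the Yes verify value in the trace. A sketch of the two traced invocations, assuming the same paths as above:

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  # 1-second software compress of the bib sample
  "$PERF" -t 1 -w compress -l "$BIB"
  # 1-second decompress of the same data with result verification (-y)
  "$PERF" -t 1 -w decompress -l "$BIB" -y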
00:05:58.945 [2024-04-27 00:39:51.467924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526322 ] 00:05:58.945 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.945 [2024-04-27 00:39:51.521816] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.945 [2024-04-27 00:39:51.592542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.945 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:58.945 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:58.945 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:58.945 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:58.945 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:58.945 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:58.945 00:39:51 -- accel/accel.sh@20 -- # val=0x1 00:05:58.945 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:58.945 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val=decompress 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val=software 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@22 -- # accel_module=software 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val=32 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 
-- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val=32 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val=1 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val=Yes 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:05:59.203 00:39:51 -- accel/accel.sh@20 -- # val= 00:05:59.203 00:39:51 -- accel/accel.sh@21 -- # case "$var" in 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # IFS=: 00:05:59.203 00:39:51 -- accel/accel.sh@19 -- # read -r var val 00:06:00.137 00:39:52 -- accel/accel.sh@20 -- # val= 00:06:00.137 00:39:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.137 00:39:52 -- accel/accel.sh@19 -- # IFS=: 00:06:00.137 00:39:52 -- accel/accel.sh@19 -- # read -r var val 00:06:00.137 00:39:52 -- accel/accel.sh@20 -- # val= 00:06:00.137 00:39:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.137 00:39:52 -- accel/accel.sh@19 -- # IFS=: 00:06:00.137 00:39:52 -- accel/accel.sh@19 -- # read -r var val 00:06:00.137 00:39:52 -- accel/accel.sh@20 -- # val= 00:06:00.137 00:39:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.137 00:39:52 -- accel/accel.sh@19 -- # IFS=: 00:06:00.137 00:39:52 -- accel/accel.sh@19 -- # read -r var val 00:06:00.137 00:39:52 -- accel/accel.sh@20 -- # val= 00:06:00.137 00:39:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.137 00:39:52 -- accel/accel.sh@19 -- # IFS=: 00:06:00.137 00:39:52 -- accel/accel.sh@19 -- # read -r var val 00:06:00.137 00:39:52 -- accel/accel.sh@20 -- # val= 00:06:00.137 00:39:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.137 00:39:52 -- accel/accel.sh@19 -- # IFS=: 00:06:00.137 00:39:52 -- accel/accel.sh@19 -- # read -r var val 00:06:00.137 00:39:52 -- accel/accel.sh@20 -- # val= 00:06:00.137 00:39:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.137 00:39:52 -- accel/accel.sh@19 -- # IFS=: 00:06:00.137 00:39:52 -- accel/accel.sh@19 -- # read -r var val 00:06:00.137 00:39:52 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.137 00:39:52 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:00.137 00:39:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.137 00:06:00.137 real 0m1.355s 00:06:00.137 user 0m1.254s 00:06:00.137 sys 0m0.115s 00:06:00.137 00:39:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.137 00:39:52 -- common/autotest_common.sh@10 -- # set +x 00:06:00.137 ************************************ 00:06:00.137 END TEST accel_decomp 00:06:00.137 ************************************ 00:06:00.137 00:39:52 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.137 00:39:52 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:00.137 00:39:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.137 00:39:52 -- common/autotest_common.sh@10 -- # set +x 00:06:00.396 ************************************ 00:06:00.396 START TEST accel_decmop_full 00:06:00.396 ************************************ 00:06:00.396 00:39:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.396 00:39:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.396 00:39:52 -- accel/accel.sh@17 -- # local accel_module 00:06:00.396 00:39:52 -- accel/accel.sh@19 -- # IFS=: 00:06:00.396 00:39:52 -- accel/accel.sh@19 -- # read -r var val 00:06:00.396 00:39:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.396 00:39:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.396 00:39:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.396 00:39:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.396 00:39:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.396 00:39:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.396 00:39:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.396 00:39:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.396 00:39:52 -- accel/accel.sh@40 -- # local IFS=, 00:06:00.396 00:39:52 -- accel/accel.sh@41 -- # jq -r . 00:06:00.396 [2024-04-27 00:39:52.962939] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
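The _full variant adds -o 0; the trace just below records a transfer size of '111250 bytes' instead of the default '4096 bytes', i.e. the whole bib file appears to be processed per operation. Sketch of the traced command, under the same path assumptions:

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  # full-file decompress (-o 0) with verification
  "$PERF" -t 1 -w decompress -l "$BIB" -y -o 0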
00:06:00.396 [2024-04-27 00:39:52.962985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526577 ] 00:06:00.396 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.396 [2024-04-27 00:39:53.016474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.396 [2024-04-27 00:39:53.087029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val= 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val= 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val= 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val=0x1 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val= 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val= 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val=decompress 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val= 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val=software 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@22 -- # accel_module=software 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val=32 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 
00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val=32 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val=1 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val=Yes 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val= 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:00.655 00:39:53 -- accel/accel.sh@20 -- # val= 00:06:00.655 00:39:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # IFS=: 00:06:00.655 00:39:53 -- accel/accel.sh@19 -- # read -r var val 00:06:02.032 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.032 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.032 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.032 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.032 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.032 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.032 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.032 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.032 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.032 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.032 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.032 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.032 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.032 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.032 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.032 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.032 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.032 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.032 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.032 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.032 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.032 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.032 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.032 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.032 00:39:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.032 00:39:54 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:02.032 00:39:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.032 00:06:02.032 real 0m1.365s 00:06:02.032 user 0m1.268s 00:06:02.032 sys 0m0.110s 00:06:02.032 00:39:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.032 00:39:54 -- common/autotest_common.sh@10 -- # set +x 00:06:02.032 ************************************ 00:06:02.032 END TEST accel_decmop_full 00:06:02.032 ************************************ 00:06:02.032 00:39:54 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:02.032 00:39:54 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:02.032 00:39:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.032 00:39:54 -- common/autotest_common.sh@10 -- # set +x 00:06:02.032 ************************************ 00:06:02.032 START TEST accel_decomp_mcore 00:06:02.032 ************************************ 00:06:02.032 00:39:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:02.033 00:39:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.033 00:39:54 -- accel/accel.sh@17 -- # local accel_module 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:02.033 00:39:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.033 00:39:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.033 00:39:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.033 00:39:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.033 00:39:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.033 00:39:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.033 00:39:54 -- accel/accel.sh@40 -- # local IFS=, 00:06:02.033 00:39:54 -- accel/accel.sh@41 -- # jq -r . 00:06:02.033 [2024-04-27 00:39:54.477963] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
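Here the harness adds -m 0xf, and the notices that follow confirm four reactors starting on cores 0-3 instead of one. Sketch of the traced multi-core run:

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  # decompress with verification, scheduled across cores 0-3 (-m 0xf)
  "$PERF" -t 1 -w decompress -l "$BIB" -y -m 0xf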
00:06:02.033 [2024-04-27 00:39:54.478019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526837 ] 00:06:02.033 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.033 [2024-04-27 00:39:54.532458] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.033 [2024-04-27 00:39:54.605353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.033 [2024-04-27 00:39:54.605452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.033 [2024-04-27 00:39:54.605524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.033 [2024-04-27 00:39:54.605526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val=0xf 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val=decompress 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val=software 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@22 -- # accel_module=software 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val=32 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val=32 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val=1 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val=Yes 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:02.033 00:39:54 -- accel/accel.sh@20 -- # val= 00:06:02.033 00:39:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # IFS=: 00:06:02.033 00:39:54 -- accel/accel.sh@19 -- # read -r var val 00:06:03.410 00:39:55 -- accel/accel.sh@20 -- # val= 00:06:03.410 00:39:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.410 00:39:55 -- accel/accel.sh@19 -- # IFS=: 00:06:03.410 00:39:55 -- accel/accel.sh@19 -- # read -r var val 00:06:03.410 00:39:55 -- accel/accel.sh@20 -- # val= 00:06:03.410 00:39:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.410 00:39:55 -- accel/accel.sh@19 -- # IFS=: 00:06:03.410 00:39:55 -- accel/accel.sh@19 -- # read -r var val 00:06:03.410 00:39:55 -- accel/accel.sh@20 -- # val= 00:06:03.410 00:39:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.410 00:39:55 -- accel/accel.sh@19 -- # IFS=: 00:06:03.410 00:39:55 -- accel/accel.sh@19 -- # read -r var val 00:06:03.410 00:39:55 -- accel/accel.sh@20 -- # val= 00:06:03.410 00:39:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.410 00:39:55 -- accel/accel.sh@19 -- # IFS=: 00:06:03.410 00:39:55 -- accel/accel.sh@19 -- # read -r var val 00:06:03.410 00:39:55 -- accel/accel.sh@20 -- # val= 00:06:03.411 00:39:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.411 00:39:55 -- accel/accel.sh@19 -- # IFS=: 00:06:03.411 00:39:55 -- accel/accel.sh@19 -- # read -r var val 00:06:03.411 00:39:55 -- accel/accel.sh@20 -- # val= 00:06:03.411 00:39:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.411 00:39:55 -- accel/accel.sh@19 -- # IFS=: 00:06:03.411 00:39:55 -- accel/accel.sh@19 -- # read -r var val 00:06:03.411 00:39:55 -- accel/accel.sh@20 -- # val= 00:06:03.411 00:39:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.411 00:39:55 -- accel/accel.sh@19 -- # IFS=: 00:06:03.411 00:39:55 -- accel/accel.sh@19 -- # read -r var val 00:06:03.411 00:39:55 -- accel/accel.sh@20 -- # val= 00:06:03.411 00:39:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.411 
00:39:55 -- accel/accel.sh@19 -- # IFS=: 00:06:03.411 00:39:55 -- accel/accel.sh@19 -- # read -r var val 00:06:03.411 00:39:55 -- accel/accel.sh@20 -- # val= 00:06:03.411 00:39:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.411 00:39:55 -- accel/accel.sh@19 -- # IFS=: 00:06:03.411 00:39:55 -- accel/accel.sh@19 -- # read -r var val 00:06:03.411 00:39:55 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.411 00:39:55 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:03.411 00:39:55 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.411 00:06:03.411 real 0m1.362s 00:06:03.411 user 0m4.591s 00:06:03.411 sys 0m0.114s 00:06:03.411 00:39:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:03.411 00:39:55 -- common/autotest_common.sh@10 -- # set +x 00:06:03.411 ************************************ 00:06:03.411 END TEST accel_decomp_mcore 00:06:03.411 ************************************ 00:06:03.411 00:39:55 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:03.411 00:39:55 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:03.411 00:39:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.411 00:39:55 -- common/autotest_common.sh@10 -- # set +x 00:06:03.411 ************************************ 00:06:03.411 START TEST accel_decomp_full_mcore 00:06:03.411 ************************************ 00:06:03.411 00:39:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:03.411 00:39:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.411 00:39:55 -- accel/accel.sh@17 -- # local accel_module 00:06:03.411 00:39:55 -- accel/accel.sh@19 -- # IFS=: 00:06:03.411 00:39:55 -- accel/accel.sh@19 -- # read -r var val 00:06:03.411 00:39:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:03.411 00:39:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:03.411 00:39:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.411 00:39:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.411 00:39:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.411 00:39:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.411 00:39:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.411 00:39:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.411 00:39:55 -- accel/accel.sh@40 -- # local IFS=, 00:06:03.411 00:39:55 -- accel/accel.sh@41 -- # jq -r . 00:06:03.411 [2024-04-27 00:39:55.986309] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
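accel_decomp_full_mcore combines the two previous variations: full-file transfers (-o 0) on the 0xf core mask. Sketch of the traced command, same path assumptions as above:

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  # full-file decompress with verification on cores 0-3
  "$PERF" -t 1 -w decompress -l "$BIB" -y -o 0 -m 0xf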
00:06:03.411 [2024-04-27 00:39:55.986354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527093 ] 00:06:03.411 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.411 [2024-04-27 00:39:56.040834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.670 [2024-04-27 00:39:56.115750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.670 [2024-04-27 00:39:56.115818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.670 [2024-04-27 00:39:56.115902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.670 [2024-04-27 00:39:56.115904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val= 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val= 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val= 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val=0xf 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val= 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val= 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val=decompress 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val= 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val=software 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@22 -- # accel_module=software 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val=32 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val=32 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val=1 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val=Yes 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.670 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.670 00:39:56 -- accel/accel.sh@20 -- # val= 00:06:03.670 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.671 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.671 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:03.671 00:39:56 -- accel/accel.sh@20 -- # val= 00:06:03.671 00:39:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.671 00:39:56 -- accel/accel.sh@19 -- # IFS=: 00:06:03.671 00:39:56 -- accel/accel.sh@19 -- # read -r var val 00:06:05.046 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.046 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.046 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.046 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.046 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.046 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.046 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.046 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.046 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.046 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.046 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.046 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.046 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.046 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.046 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.046 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.046 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.046 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.046 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.046 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.046 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.046 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.046 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.046 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.046 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.046 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.046 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 
00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.047 00:39:57 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:05.047 00:39:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.047 00:06:05.047 real 0m1.379s 00:06:05.047 user 0m4.635s 00:06:05.047 sys 0m0.118s 00:06:05.047 00:39:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.047 00:39:57 -- common/autotest_common.sh@10 -- # set +x 00:06:05.047 ************************************ 00:06:05.047 END TEST accel_decomp_full_mcore 00:06:05.047 ************************************ 00:06:05.047 00:39:57 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:05.047 00:39:57 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:05.047 00:39:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.047 00:39:57 -- common/autotest_common.sh@10 -- # set +x 00:06:05.047 ************************************ 00:06:05.047 START TEST accel_decomp_mthread 00:06:05.047 ************************************ 00:06:05.047 00:39:57 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:05.047 00:39:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.047 00:39:57 -- accel/accel.sh@17 -- # local accel_module 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:05.047 00:39:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:05.047 00:39:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.047 00:39:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.047 00:39:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.047 00:39:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.047 00:39:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.047 00:39:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.047 00:39:57 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.047 00:39:57 -- accel/accel.sh@41 -- # jq -r . 00:06:05.047 [2024-04-27 00:39:57.524911] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
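The mthread variant stays on a single core but adds -T 2 (the value 2 recorded further down in the trace), which asks accel_perf for two worker threads per core. Sketch:

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  # single-core decompress with verification and two threads (-T 2)
  "$PERF" -t 1 -w decompress -l "$BIB" -y -T 2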
00:06:05.047 [2024-04-27 00:39:57.524956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527360 ] 00:06:05.047 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.047 [2024-04-27 00:39:57.579645] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.047 [2024-04-27 00:39:57.649746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val=0x1 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val=decompress 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val=software 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@22 -- # accel_module=software 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val=32 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 
-- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val=32 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val=2 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val=Yes 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:05.047 00:39:57 -- accel/accel.sh@20 -- # val= 00:06:05.047 00:39:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # IFS=: 00:06:05.047 00:39:57 -- accel/accel.sh@19 -- # read -r var val 00:06:06.423 00:39:58 -- accel/accel.sh@20 -- # val= 00:06:06.423 00:39:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # IFS=: 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # read -r var val 00:06:06.423 00:39:58 -- accel/accel.sh@20 -- # val= 00:06:06.423 00:39:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # IFS=: 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # read -r var val 00:06:06.423 00:39:58 -- accel/accel.sh@20 -- # val= 00:06:06.423 00:39:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # IFS=: 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # read -r var val 00:06:06.423 00:39:58 -- accel/accel.sh@20 -- # val= 00:06:06.423 00:39:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # IFS=: 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # read -r var val 00:06:06.423 00:39:58 -- accel/accel.sh@20 -- # val= 00:06:06.423 00:39:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # IFS=: 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # read -r var val 00:06:06.423 00:39:58 -- accel/accel.sh@20 -- # val= 00:06:06.423 00:39:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # IFS=: 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # read -r var val 00:06:06.423 00:39:58 -- accel/accel.sh@20 -- # val= 00:06:06.423 00:39:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # IFS=: 00:06:06.423 00:39:58 -- accel/accel.sh@19 -- # read -r var val 00:06:06.423 00:39:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.423 00:39:58 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:06.423 00:39:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.423 00:06:06.423 real 0m1.355s 00:06:06.423 user 0m1.254s 00:06:06.423 sys 0m0.115s 00:06:06.423 00:39:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.423 00:39:58 -- common/autotest_common.sh@10 -- # set +x 
00:06:06.423 ************************************ 00:06:06.423 END TEST accel_decomp_mthread 00:06:06.423 ************************************ 00:06:06.423 00:39:58 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:06.423 00:39:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:06.423 00:39:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.423 00:39:58 -- common/autotest_common.sh@10 -- # set +x 00:06:06.423 ************************************ 00:06:06.423 START TEST accel_deomp_full_mthread 00:06:06.423 ************************************ 00:06:06.423 00:39:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:06.423 00:39:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.423 00:39:58 -- accel/accel.sh@17 -- # local accel_module 00:06:06.423 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.423 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.423 00:39:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:06.423 00:39:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:06.423 00:39:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.423 00:39:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.423 00:39:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.423 00:39:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.423 00:39:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.423 00:39:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.423 00:39:59 -- accel/accel.sh@40 -- # local IFS=, 00:06:06.423 00:39:59 -- accel/accel.sh@41 -- # jq -r . 00:06:06.423 [2024-04-27 00:39:59.023795] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
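For reference, the accel_perf command just launched above can be reproduced outside the test harness roughly as sketched below. In the harness a generated JSON accel config is fed on /dev/fd/62; since the trace shows no optional accel modules were enabled for this run, the sketch simply omits -c (an assumption, not part of the traced command). The remaining flags are copied from the trace; the reading of -T 2 as "more than one worker thread" is inferred from the *_mthread test names, not from accel_perf's help output.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Full-file software decompress of the bib test file, 1 second, with result
  # verification (-y); flags as traced, config argument omitted (see note above).
  "$SPDK/build/examples/accel_perf" \
      -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" \
      -y -o 0 -T 2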
00:06:06.423 [2024-04-27 00:39:59.023850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527615 ] 00:06:06.423 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.423 [2024-04-27 00:39:59.079344] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.683 [2024-04-27 00:39:59.150643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val= 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val= 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val= 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val=0x1 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val= 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val= 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val=decompress 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val= 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val=software 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@22 -- # accel_module=software 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val=32 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 
00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val=32 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val=2 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val=Yes 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val= 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:06.683 00:39:59 -- accel/accel.sh@20 -- # val= 00:06:06.683 00:39:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # IFS=: 00:06:06.683 00:39:59 -- accel/accel.sh@19 -- # read -r var val 00:06:08.083 00:40:00 -- accel/accel.sh@20 -- # val= 00:06:08.083 00:40:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # IFS=: 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # read -r var val 00:06:08.083 00:40:00 -- accel/accel.sh@20 -- # val= 00:06:08.083 00:40:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # IFS=: 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # read -r var val 00:06:08.083 00:40:00 -- accel/accel.sh@20 -- # val= 00:06:08.083 00:40:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # IFS=: 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # read -r var val 00:06:08.083 00:40:00 -- accel/accel.sh@20 -- # val= 00:06:08.083 00:40:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # IFS=: 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # read -r var val 00:06:08.083 00:40:00 -- accel/accel.sh@20 -- # val= 00:06:08.083 00:40:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # IFS=: 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # read -r var val 00:06:08.083 00:40:00 -- accel/accel.sh@20 -- # val= 00:06:08.083 00:40:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # IFS=: 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # read -r var val 00:06:08.083 00:40:00 -- accel/accel.sh@20 -- # val= 00:06:08.083 00:40:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # IFS=: 00:06:08.083 00:40:00 -- accel/accel.sh@19 -- # read -r var val 00:06:08.083 00:40:00 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.083 00:40:00 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:08.083 00:40:00 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.083 00:06:08.083 real 0m1.385s 00:06:08.083 user 0m1.280s 00:06:08.083 sys 0m0.117s 00:06:08.083 00:40:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:08.083 00:40:00 -- common/autotest_common.sh@10 -- # 
set +x 00:06:08.083 ************************************ 00:06:08.083 END TEST accel_deomp_full_mthread 00:06:08.083 ************************************ 00:06:08.083 00:40:00 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:08.083 00:40:00 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:08.083 00:40:00 -- accel/accel.sh@137 -- # build_accel_config 00:06:08.083 00:40:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:08.083 00:40:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.083 00:40:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.083 00:40:00 -- common/autotest_common.sh@10 -- # set +x 00:06:08.083 00:40:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.083 00:40:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.083 00:40:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.083 00:40:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.083 00:40:00 -- accel/accel.sh@40 -- # local IFS=, 00:06:08.083 00:40:00 -- accel/accel.sh@41 -- # jq -r . 00:06:08.083 ************************************ 00:06:08.083 START TEST accel_dif_functional_tests 00:06:08.083 ************************************ 00:06:08.083 00:40:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:08.083 [2024-04-27 00:40:00.582165] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:06:08.083 [2024-04-27 00:40:00.582205] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527876 ] 00:06:08.083 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.083 [2024-04-27 00:40:00.630729] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.083 [2024-04-27 00:40:00.704940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.083 [2024-04-27 00:40:00.705036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.083 [2024-04-27 00:40:00.705038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.374 00:06:08.374 00:06:08.374 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.374 http://cunit.sourceforge.net/ 00:06:08.374 00:06:08.374 00:06:08.374 Suite: accel_dif 00:06:08.374 Test: verify: DIF generated, GUARD check ...passed 00:06:08.374 Test: verify: DIF generated, APPTAG check ...passed 00:06:08.374 Test: verify: DIF generated, REFTAG check ...passed 00:06:08.374 Test: verify: DIF not generated, GUARD check ...[2024-04-27 00:40:00.774007] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:08.374 [2024-04-27 00:40:00.774047] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:08.374 passed 00:06:08.374 Test: verify: DIF not generated, APPTAG check ...[2024-04-27 00:40:00.774082] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:08.374 [2024-04-27 00:40:00.774097] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:08.374 passed 00:06:08.374 Test: verify: DIF not generated, REFTAG check ...[2024-04-27 00:40:00.774116] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:08.374 [2024-04-27 
00:40:00.774131] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:08.374 passed 00:06:08.374 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:08.374 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-27 00:40:00.774171] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:08.374 passed 00:06:08.374 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:08.374 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:08.374 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:08.374 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-27 00:40:00.774269] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:08.374 passed 00:06:08.374 Test: generate copy: DIF generated, GUARD check ...passed 00:06:08.374 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:08.374 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:08.374 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:08.374 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:08.374 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:08.374 Test: generate copy: iovecs-len validate ...[2024-04-27 00:40:00.774435] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:08.374 passed 00:06:08.374 Test: generate copy: buffer alignment validate ...passed 00:06:08.374 00:06:08.374 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.374 suites 1 1 n/a 0 0 00:06:08.374 tests 20 20 20 0 0 00:06:08.374 asserts 204 204 204 0 n/a 00:06:08.374 00:06:08.374 Elapsed time = 0.002 seconds 00:06:08.374 00:06:08.374 real 0m0.417s 00:06:08.374 user 0m0.604s 00:06:08.374 sys 0m0.132s 00:06:08.374 00:40:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:08.374 00:40:00 -- common/autotest_common.sh@10 -- # set +x 00:06:08.374 ************************************ 00:06:08.374 END TEST accel_dif_functional_tests 00:06:08.374 ************************************ 00:06:08.374 00:06:08.374 real 0m33.720s 00:06:08.374 user 0m36.197s 00:06:08.374 sys 0m5.138s 00:06:08.374 00:40:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:08.374 00:40:01 -- common/autotest_common.sh@10 -- # set +x 00:06:08.374 ************************************ 00:06:08.374 END TEST accel 00:06:08.374 ************************************ 00:06:08.374 00:40:01 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:08.374 00:40:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.374 00:40:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.374 00:40:01 -- common/autotest_common.sh@10 -- # set +x 00:06:08.632 ************************************ 00:06:08.632 START TEST accel_rpc 00:06:08.632 ************************************ 00:06:08.632 00:40:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:08.632 * Looking for test storage... 
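A note on the accel_dif suite that just completed: the *ERROR* lines it printed are expected, since the verify tests deliberately feed blocks whose Guard (CRC), App Tag, or Ref Tag fields are mismatched and assert that the DIF verify path reports them; all 20 tests passed. The functional test binary can be run on its own roughly as below; as with accel_perf, the JSON accel config normally delivered on /dev/fd/62 is not visible in the trace, so it is omitted here as an assumption.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Stand-alone run of the DIF functional tests (config argument omitted, see note)
  "$SPDK/test/accel/dif/dif"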
00:06:08.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:08.632 00:40:01 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:08.632 00:40:01 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1528166 00:06:08.632 00:40:01 -- accel/accel_rpc.sh@15 -- # waitforlisten 1528166 00:06:08.632 00:40:01 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:08.632 00:40:01 -- common/autotest_common.sh@817 -- # '[' -z 1528166 ']' 00:06:08.632 00:40:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.632 00:40:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:08.632 00:40:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.632 00:40:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:08.632 00:40:01 -- common/autotest_common.sh@10 -- # set +x 00:06:08.632 [2024-04-27 00:40:01.300435] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:06:08.632 [2024-04-27 00:40:01.300490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528166 ] 00:06:08.632 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.891 [2024-04-27 00:40:01.354256] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.891 [2024-04-27 00:40:01.432971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.458 00:40:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:09.458 00:40:02 -- common/autotest_common.sh@850 -- # return 0 00:06:09.458 00:40:02 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:09.458 00:40:02 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:09.458 00:40:02 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:09.458 00:40:02 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:09.458 00:40:02 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:09.458 00:40:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.458 00:40:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.458 00:40:02 -- common/autotest_common.sh@10 -- # set +x 00:06:09.716 ************************************ 00:06:09.716 START TEST accel_assign_opcode 00:06:09.716 ************************************ 00:06:09.716 00:40:02 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:09.716 00:40:02 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:09.716 00:40:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.716 00:40:02 -- common/autotest_common.sh@10 -- # set +x 00:06:09.716 [2024-04-27 00:40:02.211213] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:09.717 00:40:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.717 00:40:02 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:09.717 00:40:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.717 00:40:02 -- common/autotest_common.sh@10 -- # set +x 00:06:09.717 [2024-04-27 00:40:02.219228] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:06:09.717 00:40:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.717 00:40:02 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:09.717 00:40:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.717 00:40:02 -- common/autotest_common.sh@10 -- # set +x 00:06:09.717 00:40:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.717 00:40:02 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:09.717 00:40:02 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:09.717 00:40:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.717 00:40:02 -- accel/accel_rpc.sh@42 -- # grep software 00:06:09.717 00:40:02 -- common/autotest_common.sh@10 -- # set +x 00:06:09.976 00:40:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.976 software 00:06:09.976 00:06:09.976 real 0m0.239s 00:06:09.976 user 0m0.047s 00:06:09.976 sys 0m0.011s 00:06:09.976 00:40:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.976 00:40:02 -- common/autotest_common.sh@10 -- # set +x 00:06:09.976 ************************************ 00:06:09.976 END TEST accel_assign_opcode 00:06:09.976 ************************************ 00:06:09.976 00:40:02 -- accel/accel_rpc.sh@55 -- # killprocess 1528166 00:06:09.976 00:40:02 -- common/autotest_common.sh@936 -- # '[' -z 1528166 ']' 00:06:09.976 00:40:02 -- common/autotest_common.sh@940 -- # kill -0 1528166 00:06:09.976 00:40:02 -- common/autotest_common.sh@941 -- # uname 00:06:09.976 00:40:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.976 00:40:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1528166 00:06:09.976 00:40:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.976 00:40:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.976 00:40:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1528166' 00:06:09.976 killing process with pid 1528166 00:06:09.976 00:40:02 -- common/autotest_common.sh@955 -- # kill 1528166 00:06:09.976 00:40:02 -- common/autotest_common.sh@960 -- # wait 1528166 00:06:10.234 00:06:10.235 real 0m1.685s 00:06:10.235 user 0m1.804s 00:06:10.235 sys 0m0.440s 00:06:10.235 00:40:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:10.235 00:40:02 -- common/autotest_common.sh@10 -- # set +x 00:06:10.235 ************************************ 00:06:10.235 END TEST accel_rpc 00:06:10.235 ************************************ 00:06:10.235 00:40:02 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:10.235 00:40:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.235 00:40:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.235 00:40:02 -- common/autotest_common.sh@10 -- # set +x 00:06:10.493 ************************************ 00:06:10.493 START TEST app_cmdline 00:06:10.493 ************************************ 00:06:10.493 00:40:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:10.493 * Looking for test storage... 
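The accel_rpc sequence above boils down to a few RPCs issued around framework initialization, matching the --wait-for-rpc startup of spdk_tgt. A rough stand-alone equivalent using scripts/rpc.py (the harness's rpc_cmd funnels through the same script) is sketched here; the short sleep stands in for the harness's waitforlisten helper.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
  sleep 1
  # Assign the copy opcode to the software module before the framework starts
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
  "$SPDK/scripts/rpc.py" framework_start_init
  # Should report "software" for the copy opcode, as grepped for in the trace
  "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy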
00:06:10.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:10.493 00:40:03 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:10.493 00:40:03 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1528490 00:06:10.493 00:40:03 -- app/cmdline.sh@18 -- # waitforlisten 1528490 00:06:10.493 00:40:03 -- common/autotest_common.sh@817 -- # '[' -z 1528490 ']' 00:06:10.493 00:40:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.493 00:40:03 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:10.493 00:40:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:10.493 00:40:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.493 00:40:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:10.493 00:40:03 -- common/autotest_common.sh@10 -- # set +x 00:06:10.493 [2024-04-27 00:40:03.151969] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:06:10.493 [2024-04-27 00:40:03.152013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528490 ] 00:06:10.493 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.751 [2024-04-27 00:40:03.205685] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.751 [2024-04-27 00:40:03.285284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.318 00:40:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:11.318 00:40:03 -- common/autotest_common.sh@850 -- # return 0 00:06:11.318 00:40:03 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:11.576 { 00:06:11.576 "version": "SPDK v24.05-pre git sha1 d4fbb5733", 00:06:11.576 "fields": { 00:06:11.576 "major": 24, 00:06:11.576 "minor": 5, 00:06:11.576 "patch": 0, 00:06:11.576 "suffix": "-pre", 00:06:11.576 "commit": "d4fbb5733" 00:06:11.576 } 00:06:11.576 } 00:06:11.576 00:40:04 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:11.576 00:40:04 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:11.576 00:40:04 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:11.576 00:40:04 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:11.576 00:40:04 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:11.576 00:40:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:11.576 00:40:04 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:11.576 00:40:04 -- app/cmdline.sh@26 -- # sort 00:06:11.576 00:40:04 -- common/autotest_common.sh@10 -- # set +x 00:06:11.576 00:40:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:11.576 00:40:04 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:11.576 00:40:04 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:11.576 00:40:04 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.576 00:40:04 -- common/autotest_common.sh@638 -- # local es=0 00:06:11.576 00:40:04 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.576 00:40:04 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.576 00:40:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:11.576 00:40:04 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.576 00:40:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:11.576 00:40:04 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.576 00:40:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:11.576 00:40:04 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.576 00:40:04 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:11.576 00:40:04 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.835 request: 00:06:11.835 { 00:06:11.835 "method": "env_dpdk_get_mem_stats", 00:06:11.835 "req_id": 1 00:06:11.835 } 00:06:11.835 Got JSON-RPC error response 00:06:11.835 response: 00:06:11.835 { 00:06:11.835 "code": -32601, 00:06:11.835 "message": "Method not found" 00:06:11.835 } 00:06:11.835 00:40:04 -- common/autotest_common.sh@641 -- # es=1 00:06:11.835 00:40:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:11.835 00:40:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:11.835 00:40:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:11.835 00:40:04 -- app/cmdline.sh@1 -- # killprocess 1528490 00:06:11.835 00:40:04 -- common/autotest_common.sh@936 -- # '[' -z 1528490 ']' 00:06:11.835 00:40:04 -- common/autotest_common.sh@940 -- # kill -0 1528490 00:06:11.835 00:40:04 -- common/autotest_common.sh@941 -- # uname 00:06:11.835 00:40:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:11.835 00:40:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1528490 00:06:11.835 00:40:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:11.835 00:40:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:11.835 00:40:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1528490' 00:06:11.835 killing process with pid 1528490 00:06:11.835 00:40:04 -- common/autotest_common.sh@955 -- # kill 1528490 00:06:11.835 00:40:04 -- common/autotest_common.sh@960 -- # wait 1528490 00:06:12.095 00:06:12.095 real 0m1.671s 00:06:12.095 user 0m1.964s 00:06:12.095 sys 0m0.430s 00:06:12.095 00:40:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.095 00:40:04 -- common/autotest_common.sh@10 -- # set +x 00:06:12.095 ************************************ 00:06:12.095 END TEST app_cmdline 00:06:12.095 ************************************ 00:06:12.095 00:40:04 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:12.095 00:40:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.095 00:40:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.095 00:40:04 -- common/autotest_common.sh@10 -- # set +x 00:06:12.354 ************************************ 00:06:12.354 START TEST version 00:06:12.354 
************************************ 00:06:12.354 00:40:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:12.354 * Looking for test storage... 00:06:12.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:12.354 00:40:04 -- app/version.sh@17 -- # get_header_version major 00:06:12.354 00:40:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.354 00:40:04 -- app/version.sh@14 -- # cut -f2 00:06:12.354 00:40:04 -- app/version.sh@14 -- # tr -d '"' 00:06:12.354 00:40:04 -- app/version.sh@17 -- # major=24 00:06:12.354 00:40:04 -- app/version.sh@18 -- # get_header_version minor 00:06:12.354 00:40:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.354 00:40:04 -- app/version.sh@14 -- # cut -f2 00:06:12.354 00:40:04 -- app/version.sh@14 -- # tr -d '"' 00:06:12.354 00:40:04 -- app/version.sh@18 -- # minor=5 00:06:12.354 00:40:04 -- app/version.sh@19 -- # get_header_version patch 00:06:12.354 00:40:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.354 00:40:04 -- app/version.sh@14 -- # cut -f2 00:06:12.354 00:40:04 -- app/version.sh@14 -- # tr -d '"' 00:06:12.354 00:40:04 -- app/version.sh@19 -- # patch=0 00:06:12.354 00:40:04 -- app/version.sh@20 -- # get_header_version suffix 00:06:12.354 00:40:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.354 00:40:04 -- app/version.sh@14 -- # cut -f2 00:06:12.354 00:40:04 -- app/version.sh@14 -- # tr -d '"' 00:06:12.354 00:40:04 -- app/version.sh@20 -- # suffix=-pre 00:06:12.354 00:40:04 -- app/version.sh@22 -- # version=24.5 00:06:12.354 00:40:04 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:12.354 00:40:04 -- app/version.sh@28 -- # version=24.5rc0 00:06:12.354 00:40:04 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:12.354 00:40:04 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:12.354 00:40:05 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:12.354 00:40:05 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:12.354 00:06:12.354 real 0m0.168s 00:06:12.354 user 0m0.072s 00:06:12.354 sys 0m0.133s 00:06:12.354 00:40:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.354 00:40:05 -- common/autotest_common.sh@10 -- # set +x 00:06:12.354 ************************************ 00:06:12.354 END TEST version 00:06:12.354 ************************************ 00:06:12.614 00:40:05 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:12.614 00:40:05 -- spdk/autotest.sh@194 -- # uname -s 00:06:12.614 00:40:05 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:12.614 00:40:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:12.614 00:40:05 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:12.614 00:40:05 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:12.614 00:40:05 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:12.614 00:40:05 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:12.614 00:40:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:12.614 00:40:05 -- common/autotest_common.sh@10 -- # set +x 00:06:12.614 00:40:05 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:12.614 00:40:05 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:12.614 00:40:05 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:12.614 00:40:05 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:12.614 00:40:05 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:12.614 00:40:05 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:12.614 00:40:05 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:12.614 00:40:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:12.614 00:40:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.614 00:40:05 -- common/autotest_common.sh@10 -- # set +x 00:06:12.614 ************************************ 00:06:12.614 START TEST nvmf_tcp 00:06:12.614 ************************************ 00:06:12.614 00:40:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:12.614 * Looking for test storage... 00:06:12.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:12.874 00:40:05 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:12.874 00:40:05 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:12.874 00:40:05 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.874 00:40:05 -- nvmf/common.sh@7 -- # uname -s 00:06:12.874 00:40:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.874 00:40:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.874 00:40:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.874 00:40:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.874 00:40:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.874 00:40:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.874 00:40:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.874 00:40:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.874 00:40:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.874 00:40:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.874 00:40:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:12.874 00:40:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:12.874 00:40:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.874 00:40:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.874 00:40:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:12.874 00:40:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.874 00:40:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.874 00:40:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.874 00:40:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.874 00:40:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.874 00:40:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.874 00:40:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.874 00:40:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.874 00:40:05 -- paths/export.sh@5 -- # export PATH 00:06:12.874 00:40:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.874 00:40:05 -- nvmf/common.sh@47 -- # : 0 00:06:12.874 00:40:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:12.874 00:40:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:12.874 00:40:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.874 00:40:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.874 00:40:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.874 00:40:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:12.874 00:40:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:12.874 00:40:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:12.874 00:40:05 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:12.874 00:40:05 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:12.874 00:40:05 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:12.874 00:40:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:12.874 00:40:05 -- common/autotest_common.sh@10 -- # set +x 00:06:12.875 00:40:05 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:12.875 00:40:05 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:12.875 00:40:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:12.875 00:40:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.875 00:40:05 -- common/autotest_common.sh@10 -- # set +x 00:06:12.875 ************************************ 00:06:12.875 START TEST nvmf_example 00:06:12.875 ************************************ 00:06:12.875 00:40:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:12.875 * Looking for test storage... 
00:06:12.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:12.875 00:40:05 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.875 00:40:05 -- nvmf/common.sh@7 -- # uname -s 00:06:12.875 00:40:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.875 00:40:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.875 00:40:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.875 00:40:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.875 00:40:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.875 00:40:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.875 00:40:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.875 00:40:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.875 00:40:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.875 00:40:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.875 00:40:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:12.875 00:40:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:12.875 00:40:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.875 00:40:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.875 00:40:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:12.875 00:40:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.875 00:40:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:13.135 00:40:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.135 00:40:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.135 00:40:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.135 00:40:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.135 00:40:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.135 00:40:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.135 00:40:05 -- paths/export.sh@5 -- # export PATH 00:06:13.135 00:40:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.135 00:40:05 -- nvmf/common.sh@47 -- # : 0 00:06:13.135 00:40:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:13.135 00:40:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:13.135 00:40:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.135 00:40:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.135 00:40:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.135 00:40:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:13.135 00:40:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:13.135 00:40:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:13.135 00:40:05 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:13.135 00:40:05 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:13.135 00:40:05 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:13.135 00:40:05 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:13.135 00:40:05 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:13.135 00:40:05 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:13.135 00:40:05 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:13.135 00:40:05 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:13.135 00:40:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:13.135 00:40:05 -- common/autotest_common.sh@10 -- # set +x 00:06:13.135 00:40:05 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:13.135 00:40:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:13.135 00:40:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:13.135 00:40:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:13.135 00:40:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:13.135 00:40:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:13.135 00:40:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.135 00:40:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:13.135 00:40:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.135 00:40:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:13.135 00:40:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:13.135 00:40:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:13.135 00:40:05 -- 
common/autotest_common.sh@10 -- # set +x 00:06:18.405 00:40:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:18.405 00:40:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:18.405 00:40:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:18.405 00:40:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:18.405 00:40:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:18.405 00:40:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:18.405 00:40:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:18.405 00:40:10 -- nvmf/common.sh@295 -- # net_devs=() 00:06:18.405 00:40:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:18.405 00:40:10 -- nvmf/common.sh@296 -- # e810=() 00:06:18.405 00:40:10 -- nvmf/common.sh@296 -- # local -ga e810 00:06:18.405 00:40:10 -- nvmf/common.sh@297 -- # x722=() 00:06:18.405 00:40:10 -- nvmf/common.sh@297 -- # local -ga x722 00:06:18.405 00:40:10 -- nvmf/common.sh@298 -- # mlx=() 00:06:18.405 00:40:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:18.405 00:40:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:18.405 00:40:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:18.405 00:40:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:18.405 00:40:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:18.405 00:40:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:18.405 00:40:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:18.405 00:40:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:18.405 00:40:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:18.405 00:40:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:18.405 00:40:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:18.405 00:40:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:18.405 00:40:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:18.405 00:40:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:18.405 00:40:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:18.405 00:40:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:18.405 00:40:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:18.405 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:18.405 00:40:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:18.405 00:40:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:18.405 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:18.405 00:40:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:06:18.405 00:40:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:18.405 00:40:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:18.405 00:40:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.405 00:40:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:18.405 00:40:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.405 00:40:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:18.405 Found net devices under 0000:86:00.0: cvl_0_0 00:06:18.405 00:40:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.405 00:40:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:18.405 00:40:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:18.405 00:40:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:18.405 00:40:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:18.405 00:40:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:18.405 Found net devices under 0000:86:00.1: cvl_0_1 00:06:18.405 00:40:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:18.405 00:40:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:18.405 00:40:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:18.405 00:40:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:18.405 00:40:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:18.405 00:40:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:18.405 00:40:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:18.405 00:40:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:18.405 00:40:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:18.405 00:40:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:18.405 00:40:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:18.405 00:40:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:18.405 00:40:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:18.405 00:40:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:18.405 00:40:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:18.405 00:40:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:18.405 00:40:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:18.405 00:40:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:18.405 00:40:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:18.405 00:40:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:18.405 00:40:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:18.405 00:40:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:18.405 00:40:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:18.405 00:40:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:18.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:18.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:06:18.405 00:06:18.405 --- 10.0.0.2 ping statistics --- 00:06:18.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.405 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:06:18.405 00:40:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:18.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:18.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:06:18.405 00:06:18.405 --- 10.0.0.1 ping statistics --- 00:06:18.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:18.405 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:06:18.405 00:40:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:18.405 00:40:10 -- nvmf/common.sh@411 -- # return 0 00:06:18.405 00:40:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:18.405 00:40:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:18.405 00:40:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:18.405 00:40:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:18.405 00:40:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:18.405 00:40:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:18.405 00:40:10 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:18.405 00:40:10 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:18.405 00:40:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:18.405 00:40:10 -- common/autotest_common.sh@10 -- # set +x 00:06:18.405 00:40:10 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:18.405 00:40:10 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:18.405 00:40:10 -- target/nvmf_example.sh@34 -- # nvmfpid=1532127 00:06:18.405 00:40:10 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:18.405 00:40:10 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:18.405 00:40:10 -- target/nvmf_example.sh@36 -- # waitforlisten 1532127 00:06:18.405 00:40:10 -- common/autotest_common.sh@817 -- # '[' -z 1532127 ']' 00:06:18.405 00:40:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.405 00:40:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:18.405 00:40:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
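The two successful pings above confirm the loopback topology that nvmf_tcp_init assembled a few lines earlier: the target-side E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, and an iptables rule opens TCP port 4420. Condensed from the traced commands:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT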
00:06:18.405 00:40:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:18.405 00:40:10 -- common/autotest_common.sh@10 -- # set +x 00:06:18.405 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.342 00:40:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:19.342 00:40:11 -- common/autotest_common.sh@850 -- # return 0 00:06:19.342 00:40:11 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:19.342 00:40:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:19.342 00:40:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.342 00:40:11 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:19.342 00:40:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.342 00:40:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.342 00:40:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.342 00:40:11 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:19.342 00:40:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.342 00:40:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.342 00:40:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.342 00:40:11 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:19.342 00:40:11 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:19.342 00:40:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.342 00:40:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.342 00:40:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.342 00:40:11 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:19.342 00:40:11 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:19.342 00:40:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.342 00:40:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.342 00:40:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.342 00:40:11 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:19.342 00:40:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:19.342 00:40:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.342 00:40:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:19.342 00:40:11 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:19.342 00:40:11 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:19.342 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.326 Initializing NVMe Controllers 00:06:29.326 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:29.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:29.326 Initialization complete. Launching workers. 
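Above, the example target (build/examples/nvmf, started inside cvl_0_0_ns_spdk) is provisioned through the test's rpc_cmd helper: a TCP transport with the options shown, a 64 MiB / 512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.2:4420; spdk_nvme_perf then drives it for 10 seconds. A hedged sketch of doing the same provisioning by hand with scripts/rpc.py (my assumption; the test uses rpc_cmd). The RPC names, arguments, and perf flags are the ones in the trace; the target's RPC endpoint is the default UNIX socket /var/tmp/spdk.sock, which is a filesystem path and so reachable from the root namespace even though the app runs in a netns:

RPC=./scripts/rpc.py                               # talks to /var/tmp/spdk.sock by default

$RPC nvmf_create_transport -t tcp -o -u 8192       # same transport options as the trace
$RPC bdev_malloc_create 64 512                     # 64 MiB bdev, 512-byte blocks -> Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: the same 10-second randrw run the log launches
# (queue depth 64, 4 KiB I/O, 30% reads), whose results follow below.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'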
00:06:29.326 ======================================================== 00:06:29.326 Latency(us) 00:06:29.326 Device Information : IOPS MiB/s Average min max 00:06:29.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13781.32 53.83 4643.98 711.45 18003.52 00:06:29.326 ======================================================== 00:06:29.326 Total : 13781.32 53.83 4643.98 711.45 18003.52 00:06:29.326 00:06:29.326 00:40:22 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:29.326 00:40:22 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:29.326 00:40:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:06:29.326 00:40:22 -- nvmf/common.sh@117 -- # sync 00:06:29.585 00:40:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:29.585 00:40:22 -- nvmf/common.sh@120 -- # set +e 00:06:29.585 00:40:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:29.585 00:40:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:29.585 rmmod nvme_tcp 00:06:29.585 rmmod nvme_fabrics 00:06:29.585 rmmod nvme_keyring 00:06:29.585 00:40:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:29.585 00:40:22 -- nvmf/common.sh@124 -- # set -e 00:06:29.585 00:40:22 -- nvmf/common.sh@125 -- # return 0 00:06:29.585 00:40:22 -- nvmf/common.sh@478 -- # '[' -n 1532127 ']' 00:06:29.585 00:40:22 -- nvmf/common.sh@479 -- # killprocess 1532127 00:06:29.585 00:40:22 -- common/autotest_common.sh@936 -- # '[' -z 1532127 ']' 00:06:29.585 00:40:22 -- common/autotest_common.sh@940 -- # kill -0 1532127 00:06:29.585 00:40:22 -- common/autotest_common.sh@941 -- # uname 00:06:29.585 00:40:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.585 00:40:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1532127 00:06:29.585 00:40:22 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:06:29.585 00:40:22 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:06:29.585 00:40:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1532127' 00:06:29.585 killing process with pid 1532127 00:06:29.585 00:40:22 -- common/autotest_common.sh@955 -- # kill 1532127 00:06:29.585 00:40:22 -- common/autotest_common.sh@960 -- # wait 1532127 00:06:29.844 nvmf threads initialize successfully 00:06:29.844 bdev subsystem init successfully 00:06:29.844 created a nvmf target service 00:06:29.844 create targets's poll groups done 00:06:29.844 all subsystems of target started 00:06:29.844 nvmf target is running 00:06:29.844 all subsystems of target stopped 00:06:29.844 destroy targets's poll groups done 00:06:29.844 destroyed the nvmf target service 00:06:29.844 bdev subsystem finish successfully 00:06:29.844 nvmf threads destroy successfully 00:06:29.844 00:40:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:06:29.844 00:40:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:06:29.844 00:40:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:06:29.844 00:40:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:29.844 00:40:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:29.844 00:40:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.844 00:40:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:29.844 00:40:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.756 00:40:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:31.756 00:40:24 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:31.756 00:40:24 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:06:31.756 00:40:24 -- common/autotest_common.sh@10 -- # set +x 00:06:31.756 00:06:31.756 real 0m18.969s 00:06:31.756 user 0m45.626s 00:06:31.756 sys 0m5.345s 00:06:31.756 00:40:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.756 00:40:24 -- common/autotest_common.sh@10 -- # set +x 00:06:31.756 ************************************ 00:06:31.756 END TEST nvmf_example 00:06:31.756 ************************************ 00:06:32.016 00:40:24 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:32.016 00:40:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:32.016 00:40:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.016 00:40:24 -- common/autotest_common.sh@10 -- # set +x 00:06:32.016 ************************************ 00:06:32.016 START TEST nvmf_filesystem 00:06:32.016 ************************************ 00:06:32.016 00:40:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:32.016 * Looking for test storage... 00:06:32.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.016 00:40:24 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:32.016 00:40:24 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:32.016 00:40:24 -- common/autotest_common.sh@34 -- # set -e 00:06:32.016 00:40:24 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:32.016 00:40:24 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:32.016 00:40:24 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:32.016 00:40:24 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:32.016 00:40:24 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:32.016 00:40:24 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:32.016 00:40:24 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:32.016 00:40:24 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:32.016 00:40:24 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:32.016 00:40:24 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:32.016 00:40:24 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:32.016 00:40:24 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:32.016 00:40:24 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:32.016 00:40:24 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:32.016 00:40:24 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:32.016 00:40:24 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:32.016 00:40:24 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:32.016 00:40:24 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:32.016 00:40:24 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:32.016 00:40:24 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:32.016 00:40:24 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:32.016 00:40:24 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:32.016 00:40:24 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:32.016 00:40:24 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:32.016 00:40:24 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:32.016 00:40:24 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:32.016 00:40:24 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:32.016 00:40:24 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:32.016 00:40:24 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:32.016 00:40:24 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:32.016 00:40:24 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:32.016 00:40:24 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:32.016 00:40:24 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:32.016 00:40:24 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:32.016 00:40:24 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:32.016 00:40:24 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:32.016 00:40:24 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:32.016 00:40:24 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:32.016 00:40:24 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:32.016 00:40:24 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:32.016 00:40:24 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:32.016 00:40:24 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:32.016 00:40:24 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:32.016 00:40:24 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:32.016 00:40:24 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:32.017 00:40:24 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:32.017 00:40:24 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:32.017 00:40:24 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:32.017 00:40:24 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:32.017 00:40:24 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:32.017 00:40:24 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:32.017 00:40:24 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:32.017 00:40:24 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:32.017 00:40:24 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:32.017 00:40:24 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:32.017 00:40:24 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:06:32.017 00:40:24 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:32.017 00:40:24 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:06:32.017 00:40:24 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:06:32.017 00:40:24 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:06:32.017 00:40:24 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:06:32.017 00:40:24 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:06:32.017 00:40:24 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:06:32.017 00:40:24 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:06:32.017 00:40:24 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:06:32.017 00:40:24 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:06:32.017 00:40:24 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:06:32.017 00:40:24 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:06:32.017 00:40:24 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:06:32.017 
00:40:24 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:06:32.017 00:40:24 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:06:32.017 00:40:24 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:06:32.017 00:40:24 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:32.017 00:40:24 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:06:32.017 00:40:24 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:06:32.017 00:40:24 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:06:32.017 00:40:24 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:06:32.017 00:40:24 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:06:32.017 00:40:24 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:06:32.017 00:40:24 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:06:32.017 00:40:24 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:06:32.017 00:40:24 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:06:32.017 00:40:24 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:06:32.017 00:40:24 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:06:32.017 00:40:24 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:32.017 00:40:24 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:06:32.017 00:40:24 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:06:32.017 00:40:24 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:32.017 00:40:24 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:32.017 00:40:24 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:32.017 00:40:24 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:32.017 00:40:24 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:32.017 00:40:24 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:32.017 00:40:24 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:32.017 00:40:24 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:32.017 00:40:24 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:32.017 00:40:24 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:32.017 00:40:24 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:32.017 00:40:24 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:32.017 00:40:24 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:32.017 00:40:24 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:32.017 00:40:24 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:32.017 00:40:24 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:32.017 #define SPDK_CONFIG_H 00:06:32.017 #define SPDK_CONFIG_APPS 1 00:06:32.017 #define SPDK_CONFIG_ARCH native 00:06:32.017 #undef SPDK_CONFIG_ASAN 00:06:32.017 #undef SPDK_CONFIG_AVAHI 00:06:32.017 #undef SPDK_CONFIG_CET 00:06:32.017 #define SPDK_CONFIG_COVERAGE 1 00:06:32.017 #define SPDK_CONFIG_CROSS_PREFIX 00:06:32.017 #undef SPDK_CONFIG_CRYPTO 00:06:32.017 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:32.017 #undef 
SPDK_CONFIG_CUSTOMOCF 00:06:32.017 #undef SPDK_CONFIG_DAOS 00:06:32.017 #define SPDK_CONFIG_DAOS_DIR 00:06:32.017 #define SPDK_CONFIG_DEBUG 1 00:06:32.017 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:32.017 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:32.017 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:32.017 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:32.017 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:32.017 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:32.017 #define SPDK_CONFIG_EXAMPLES 1 00:06:32.017 #undef SPDK_CONFIG_FC 00:06:32.017 #define SPDK_CONFIG_FC_PATH 00:06:32.017 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:32.017 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:32.017 #undef SPDK_CONFIG_FUSE 00:06:32.017 #undef SPDK_CONFIG_FUZZER 00:06:32.017 #define SPDK_CONFIG_FUZZER_LIB 00:06:32.017 #undef SPDK_CONFIG_GOLANG 00:06:32.017 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:32.017 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:32.017 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:32.017 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:32.017 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:32.017 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:32.017 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:32.017 #define SPDK_CONFIG_IDXD 1 00:06:32.017 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:32.017 #undef SPDK_CONFIG_IPSEC_MB 00:06:32.017 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:32.017 #define SPDK_CONFIG_ISAL 1 00:06:32.017 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:32.017 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:32.017 #define SPDK_CONFIG_LIBDIR 00:06:32.017 #undef SPDK_CONFIG_LTO 00:06:32.017 #define SPDK_CONFIG_MAX_LCORES 00:06:32.017 #define SPDK_CONFIG_NVME_CUSE 1 00:06:32.017 #undef SPDK_CONFIG_OCF 00:06:32.017 #define SPDK_CONFIG_OCF_PATH 00:06:32.017 #define SPDK_CONFIG_OPENSSL_PATH 00:06:32.017 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:32.017 #define SPDK_CONFIG_PGO_DIR 00:06:32.017 #undef SPDK_CONFIG_PGO_USE 00:06:32.017 #define SPDK_CONFIG_PREFIX /usr/local 00:06:32.017 #undef SPDK_CONFIG_RAID5F 00:06:32.017 #undef SPDK_CONFIG_RBD 00:06:32.017 #define SPDK_CONFIG_RDMA 1 00:06:32.017 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:32.017 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:32.017 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:32.017 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:32.017 #define SPDK_CONFIG_SHARED 1 00:06:32.017 #undef SPDK_CONFIG_SMA 00:06:32.017 #define SPDK_CONFIG_TESTS 1 00:06:32.017 #undef SPDK_CONFIG_TSAN 00:06:32.017 #define SPDK_CONFIG_UBLK 1 00:06:32.017 #define SPDK_CONFIG_UBSAN 1 00:06:32.017 #undef SPDK_CONFIG_UNIT_TESTS 00:06:32.017 #undef SPDK_CONFIG_URING 00:06:32.017 #define SPDK_CONFIG_URING_PATH 00:06:32.017 #undef SPDK_CONFIG_URING_ZNS 00:06:32.017 #undef SPDK_CONFIG_USDT 00:06:32.017 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:32.017 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:32.017 #define SPDK_CONFIG_VFIO_USER 1 00:06:32.017 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:32.017 #define SPDK_CONFIG_VHOST 1 00:06:32.017 #define SPDK_CONFIG_VIRTIO 1 00:06:32.017 #undef SPDK_CONFIG_VTUNE 00:06:32.017 #define SPDK_CONFIG_VTUNE_DIR 00:06:32.017 #define SPDK_CONFIG_WERROR 1 00:06:32.017 #define SPDK_CONFIG_WPDK_DIR 00:06:32.017 #undef SPDK_CONFIG_XNVME 00:06:32.017 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:32.017 00:40:24 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:32.017 00:40:24 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.017 00:40:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.017 00:40:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.017 00:40:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.017 00:40:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.017 00:40:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.017 00:40:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.017 00:40:24 -- paths/export.sh@5 -- # export PATH 00:06:32.017 00:40:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.017 00:40:24 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:32.018 00:40:24 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:32.278 00:40:24 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:32.278 00:40:24 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:32.278 00:40:24 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:32.278 00:40:24 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:32.278 00:40:24 -- pm/common@67 -- # TEST_TAG=N/A 00:06:32.278 00:40:24 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:32.278 00:40:24 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:32.278 00:40:24 -- pm/common@71 -- # uname -s 00:06:32.278 00:40:24 -- pm/common@71 -- # PM_OS=Linux 00:06:32.278 00:40:24 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:32.278 00:40:24 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:06:32.278 00:40:24 -- pm/common@76 -- # [[ Linux == Linux ]] 00:06:32.278 00:40:24 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:06:32.278 00:40:24 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:06:32.278 00:40:24 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:32.278 00:40:24 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:32.278 00:40:24 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:06:32.278 00:40:24 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:06:32.278 00:40:24 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:32.278 00:40:24 -- common/autotest_common.sh@57 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:32.278 00:40:24 -- common/autotest_common.sh@61 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:32.278 00:40:24 -- common/autotest_common.sh@63 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:32.278 00:40:24 -- common/autotest_common.sh@65 -- # : 1 00:06:32.278 00:40:24 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:32.278 00:40:24 -- common/autotest_common.sh@67 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:32.278 00:40:24 -- common/autotest_common.sh@69 -- # : 00:06:32.278 00:40:24 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:32.278 00:40:24 -- common/autotest_common.sh@71 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:32.278 00:40:24 -- common/autotest_common.sh@73 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:32.278 00:40:24 -- common/autotest_common.sh@75 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:32.278 00:40:24 -- common/autotest_common.sh@77 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:32.278 00:40:24 -- common/autotest_common.sh@79 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:32.278 00:40:24 -- common/autotest_common.sh@81 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:32.278 00:40:24 -- common/autotest_common.sh@83 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:32.278 00:40:24 -- common/autotest_common.sh@85 -- # : 1 00:06:32.278 00:40:24 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:32.278 00:40:24 -- common/autotest_common.sh@87 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:32.278 00:40:24 -- common/autotest_common.sh@89 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:32.278 00:40:24 -- common/autotest_common.sh@91 -- # : 1 
00:06:32.278 00:40:24 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:32.278 00:40:24 -- common/autotest_common.sh@93 -- # : 1 00:06:32.278 00:40:24 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:32.278 00:40:24 -- common/autotest_common.sh@95 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:32.278 00:40:24 -- common/autotest_common.sh@97 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:32.278 00:40:24 -- common/autotest_common.sh@99 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:32.278 00:40:24 -- common/autotest_common.sh@101 -- # : tcp 00:06:32.278 00:40:24 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:32.278 00:40:24 -- common/autotest_common.sh@103 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:32.278 00:40:24 -- common/autotest_common.sh@105 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:32.278 00:40:24 -- common/autotest_common.sh@107 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:32.278 00:40:24 -- common/autotest_common.sh@109 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:32.278 00:40:24 -- common/autotest_common.sh@111 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:32.278 00:40:24 -- common/autotest_common.sh@113 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:32.278 00:40:24 -- common/autotest_common.sh@115 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:32.278 00:40:24 -- common/autotest_common.sh@117 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:32.278 00:40:24 -- common/autotest_common.sh@119 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:32.278 00:40:24 -- common/autotest_common.sh@121 -- # : 1 00:06:32.278 00:40:24 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:32.278 00:40:24 -- common/autotest_common.sh@123 -- # : 00:06:32.278 00:40:24 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:32.278 00:40:24 -- common/autotest_common.sh@125 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:32.278 00:40:24 -- common/autotest_common.sh@127 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:32.278 00:40:24 -- common/autotest_common.sh@129 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:32.278 00:40:24 -- common/autotest_common.sh@131 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:32.278 00:40:24 -- common/autotest_common.sh@133 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:32.278 00:40:24 -- common/autotest_common.sh@135 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:32.278 00:40:24 -- common/autotest_common.sh@137 -- # : 00:06:32.278 00:40:24 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:32.278 00:40:24 -- 
common/autotest_common.sh@139 -- # : true 00:06:32.278 00:40:24 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:32.278 00:40:24 -- common/autotest_common.sh@141 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:32.278 00:40:24 -- common/autotest_common.sh@143 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:32.278 00:40:24 -- common/autotest_common.sh@145 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:32.278 00:40:24 -- common/autotest_common.sh@147 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:32.278 00:40:24 -- common/autotest_common.sh@149 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:32.278 00:40:24 -- common/autotest_common.sh@151 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:32.278 00:40:24 -- common/autotest_common.sh@153 -- # : e810 00:06:32.278 00:40:24 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:32.278 00:40:24 -- common/autotest_common.sh@155 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:32.278 00:40:24 -- common/autotest_common.sh@157 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:32.278 00:40:24 -- common/autotest_common.sh@159 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:32.278 00:40:24 -- common/autotest_common.sh@161 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:32.278 00:40:24 -- common/autotest_common.sh@163 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:32.278 00:40:24 -- common/autotest_common.sh@166 -- # : 00:06:32.278 00:40:24 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:32.278 00:40:24 -- common/autotest_common.sh@168 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:32.278 00:40:24 -- common/autotest_common.sh@170 -- # : 0 00:06:32.278 00:40:24 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:32.278 00:40:24 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:32.278 00:40:24 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:32.278 00:40:24 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:32.278 00:40:24 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:32.278 00:40:24 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.278 00:40:24 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.278 00:40:24 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.278 00:40:24 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:32.278 00:40:24 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:32.278 00:40:24 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:32.278 00:40:24 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:32.278 00:40:24 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:32.278 00:40:24 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:32.278 00:40:24 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:32.278 00:40:24 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:32.278 00:40:24 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:32.278 00:40:24 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:32.278 00:40:24 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:32.278 00:40:24 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:32.278 00:40:24 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:32.278 00:40:24 -- common/autotest_common.sh@199 -- # cat 00:06:32.278 00:40:24 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:06:32.278 00:40:24 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:32.278 00:40:24 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:32.278 00:40:24 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:32.278 00:40:24 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:32.278 00:40:24 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:06:32.278 00:40:24 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:06:32.278 00:40:24 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:32.278 00:40:24 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:32.278 00:40:24 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:32.278 00:40:24 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:32.278 00:40:24 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:32.278 00:40:24 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:32.278 00:40:24 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:32.278 00:40:24 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:32.278 00:40:24 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:32.278 00:40:24 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:32.278 00:40:24 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:32.278 00:40:24 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:32.278 00:40:24 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:06:32.278 00:40:24 -- common/autotest_common.sh@252 -- # export valgrind= 00:06:32.278 00:40:24 -- common/autotest_common.sh@252 -- # valgrind= 00:06:32.278 00:40:24 -- common/autotest_common.sh@258 -- # uname -s 00:06:32.278 00:40:24 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:06:32.278 00:40:24 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:06:32.278 00:40:24 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:06:32.278 00:40:24 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:06:32.279 00:40:24 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:32.279 00:40:24 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:06:32.279 
00:40:24 -- common/autotest_common.sh@268 -- # MAKE=make 00:06:32.279 00:40:24 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j96 00:06:32.279 00:40:24 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:06:32.279 00:40:24 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:06:32.279 00:40:24 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:06:32.279 00:40:24 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:06:32.279 00:40:24 -- common/autotest_common.sh@289 -- # for i in "$@" 00:06:32.279 00:40:24 -- common/autotest_common.sh@290 -- # case "$i" in 00:06:32.279 00:40:24 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:06:32.279 00:40:24 -- common/autotest_common.sh@307 -- # [[ -z 1534548 ]] 00:06:32.279 00:40:24 -- common/autotest_common.sh@307 -- # kill -0 1534548 00:06:32.279 00:40:24 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:06:32.279 00:40:24 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:06:32.279 00:40:24 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:06:32.279 00:40:24 -- common/autotest_common.sh@320 -- # local mount target_dir 00:06:32.279 00:40:24 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:06:32.279 00:40:24 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:06:32.279 00:40:24 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:06:32.279 00:40:24 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:06:32.279 00:40:24 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.J049p4 00:06:32.279 00:40:24 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:32.279 00:40:24 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:06:32.279 00:40:24 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:06:32.279 00:40:24 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.J049p4/tests/target /tmp/spdk.J049p4 00:06:32.279 00:40:24 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:06:32.279 00:40:24 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.279 00:40:24 -- common/autotest_common.sh@316 -- # df -T 00:06:32.279 00:40:24 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:06:32.279 00:40:24 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:06:32.279 00:40:24 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # avails["$mount"]=996753408 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:06:32.279 00:40:24 -- common/autotest_common.sh@352 -- # uses["$mount"]=4287676416 00:06:32.279 00:40:24 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # avails["$mount"]=186307137536 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # sizes["$mount"]=195974328320 00:06:32.279 00:40:24 -- common/autotest_common.sh@352 -- # uses["$mount"]=9667190784 00:06:32.279 00:40:24 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # avails["$mount"]=97933623296 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # sizes["$mount"]=97987162112 00:06:32.279 00:40:24 -- common/autotest_common.sh@352 -- # uses["$mount"]=53538816 00:06:32.279 00:40:24 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # avails["$mount"]=39185489920 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # sizes["$mount"]=39194865664 00:06:32.279 00:40:24 -- common/autotest_common.sh@352 -- # uses["$mount"]=9375744 00:06:32.279 00:40:24 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # avails["$mount"]=97986146304 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # sizes["$mount"]=97987166208 00:06:32.279 00:40:24 -- common/autotest_common.sh@352 -- # uses["$mount"]=1019904 00:06:32.279 00:40:24 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:06:32.279 00:40:24 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # avails["$mount"]=19597426688 00:06:32.279 00:40:24 -- common/autotest_common.sh@351 -- # sizes["$mount"]=19597430784 00:06:32.279 00:40:24 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:06:32.279 00:40:24 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:06:32.279 00:40:24 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:06:32.279 * Looking for test storage... 
00:06:32.279 00:40:24 -- common/autotest_common.sh@357 -- # local target_space new_size 00:06:32.279 00:40:24 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:06:32.279 00:40:24 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.279 00:40:24 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:32.279 00:40:24 -- common/autotest_common.sh@361 -- # mount=/ 00:06:32.279 00:40:24 -- common/autotest_common.sh@363 -- # target_space=186307137536 00:06:32.279 00:40:24 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:06:32.279 00:40:24 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:06:32.279 00:40:24 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:06:32.279 00:40:24 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:06:32.279 00:40:24 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:06:32.279 00:40:24 -- common/autotest_common.sh@370 -- # new_size=11881783296 00:06:32.279 00:40:24 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:32.279 00:40:24 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.279 00:40:24 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.279 00:40:24 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.279 00:40:24 -- common/autotest_common.sh@378 -- # return 0 00:06:32.279 00:40:24 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:06:32.279 00:40:24 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:06:32.279 00:40:24 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:32.279 00:40:24 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:32.279 00:40:24 -- common/autotest_common.sh@1673 -- # true 00:06:32.279 00:40:24 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:06:32.279 00:40:24 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:32.279 00:40:24 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:32.279 00:40:24 -- common/autotest_common.sh@27 -- # exec 00:06:32.279 00:40:24 -- common/autotest_common.sh@29 -- # exec 00:06:32.279 00:40:24 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:32.279 00:40:24 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:32.279 00:40:24 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:32.279 00:40:24 -- common/autotest_common.sh@18 -- # set -x 00:06:32.279 00:40:24 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.279 00:40:24 -- nvmf/common.sh@7 -- # uname -s 00:06:32.279 00:40:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.279 00:40:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.279 00:40:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.279 00:40:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.279 00:40:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.279 00:40:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.279 00:40:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.279 00:40:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.279 00:40:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.279 00:40:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.279 00:40:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:32.279 00:40:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:32.279 00:40:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.279 00:40:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.279 00:40:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.279 00:40:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.279 00:40:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.279 00:40:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.279 00:40:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.279 00:40:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.279 00:40:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.279 00:40:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.279 00:40:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.279 00:40:24 -- paths/export.sh@5 -- # export PATH 00:06:32.279 00:40:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.279 00:40:24 -- nvmf/common.sh@47 -- # : 0 00:06:32.279 00:40:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:32.279 00:40:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:32.279 00:40:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.279 00:40:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.279 00:40:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.279 00:40:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:32.279 00:40:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:32.279 00:40:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:32.279 00:40:24 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:32.279 00:40:24 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:32.279 00:40:24 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:32.279 00:40:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:32.279 00:40:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.279 00:40:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:32.279 00:40:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:32.279 00:40:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:32.279 00:40:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.279 00:40:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:32.279 00:40:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.279 00:40:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:32.279 00:40:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:32.279 00:40:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:32.279 00:40:24 -- common/autotest_common.sh@10 -- # set +x 00:06:37.555 00:40:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:37.555 00:40:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:06:37.555 00:40:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:37.555 00:40:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:37.555 00:40:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:37.555 00:40:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:37.555 00:40:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:37.555 00:40:29 -- 
nvmf/common.sh@295 -- # net_devs=() 00:06:37.555 00:40:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:37.555 00:40:29 -- nvmf/common.sh@296 -- # e810=() 00:06:37.555 00:40:29 -- nvmf/common.sh@296 -- # local -ga e810 00:06:37.555 00:40:29 -- nvmf/common.sh@297 -- # x722=() 00:06:37.555 00:40:29 -- nvmf/common.sh@297 -- # local -ga x722 00:06:37.555 00:40:29 -- nvmf/common.sh@298 -- # mlx=() 00:06:37.555 00:40:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:06:37.555 00:40:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:37.555 00:40:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:37.555 00:40:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:37.555 00:40:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:37.555 00:40:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:37.555 00:40:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:37.555 00:40:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:37.555 00:40:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:37.555 00:40:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:37.555 00:40:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:37.555 00:40:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:37.555 00:40:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:37.555 00:40:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:37.555 00:40:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:37.555 00:40:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:37.555 00:40:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:37.555 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:37.555 00:40:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:37.555 00:40:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:37.555 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:37.555 00:40:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:37.555 00:40:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:37.555 00:40:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.555 00:40:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:37.555 00:40:29 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.555 00:40:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:37.555 Found net devices under 0000:86:00.0: cvl_0_0 00:06:37.555 00:40:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.555 00:40:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:37.555 00:40:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.555 00:40:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:06:37.555 00:40:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.555 00:40:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:37.555 Found net devices under 0000:86:00.1: cvl_0_1 00:06:37.555 00:40:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.555 00:40:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:06:37.555 00:40:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:06:37.555 00:40:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:06:37.555 00:40:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:06:37.555 00:40:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.555 00:40:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.555 00:40:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:37.555 00:40:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:37.555 00:40:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:37.555 00:40:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:37.555 00:40:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:37.555 00:40:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:37.555 00:40:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.555 00:40:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:37.555 00:40:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:37.555 00:40:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:37.555 00:40:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:37.555 00:40:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:37.555 00:40:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:37.555 00:40:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:37.555 00:40:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:37.555 00:40:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:37.555 00:40:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:37.555 00:40:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:37.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:37.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:06:37.555 00:06:37.555 --- 10.0.0.2 ping statistics --- 00:06:37.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.555 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:06:37.555 00:40:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:37.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:37.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:06:37.555 00:06:37.555 --- 10.0.0.1 ping statistics --- 00:06:37.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.555 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:06:37.555 00:40:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.555 00:40:30 -- nvmf/common.sh@411 -- # return 0 00:06:37.555 00:40:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:06:37.555 00:40:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.555 00:40:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:06:37.555 00:40:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:06:37.556 00:40:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.556 00:40:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:06:37.556 00:40:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:06:37.556 00:40:30 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:37.556 00:40:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:37.556 00:40:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.556 00:40:30 -- common/autotest_common.sh@10 -- # set +x 00:06:37.816 ************************************ 00:06:37.816 START TEST nvmf_filesystem_no_in_capsule 00:06:37.816 ************************************ 00:06:37.816 00:40:30 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:06:37.816 00:40:30 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:37.816 00:40:30 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:37.816 00:40:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:37.816 00:40:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:37.816 00:40:30 -- common/autotest_common.sh@10 -- # set +x 00:06:37.816 00:40:30 -- nvmf/common.sh@470 -- # nvmfpid=1537579 00:06:37.816 00:40:30 -- nvmf/common.sh@471 -- # waitforlisten 1537579 00:06:37.816 00:40:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:37.816 00:40:30 -- common/autotest_common.sh@817 -- # '[' -z 1537579 ']' 00:06:37.816 00:40:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.816 00:40:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:37.816 00:40:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.816 00:40:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:37.816 00:40:30 -- common/autotest_common.sh@10 -- # set +x 00:06:37.816 [2024-04-27 00:40:30.332185] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:06:37.816 [2024-04-27 00:40:30.332224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.816 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.816 [2024-04-27 00:40:30.388210] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.816 [2024-04-27 00:40:30.468019] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
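The trace above (nvmf/common.sh@242-268) is the whole single-host NVMe/TCP topology for this run: the first E810 port (cvl_0_0) is moved into a private network namespace and acts as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, and both directions are verified with a single ping. A minimal standalone sketch of the same steps, assuming the interface names and 10.0.0.0/24 addressing seen in the log:

# Sketch only - mirrors the commands in the trace, not the harness itself.
TARGET_IF=cvl_0_0            # port handed to the target namespace
INITIATOR_IF=cvl_0_1         # port left in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                             # target port now only visible inside the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"     # target address
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                               # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                           # target namespace -> initiator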
00:06:37.816 [2024-04-27 00:40:30.468054] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.816 [2024-04-27 00:40:30.468061] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.816 [2024-04-27 00:40:30.468067] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.816 [2024-04-27 00:40:30.468092] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:37.816 [2024-04-27 00:40:30.468131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.816 [2024-04-27 00:40:30.468200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.816 [2024-04-27 00:40:30.468291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.816 [2024-04-27 00:40:30.468292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.754 00:40:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:38.754 00:40:31 -- common/autotest_common.sh@850 -- # return 0 00:06:38.754 00:40:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:38.754 00:40:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:38.754 00:40:31 -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 00:40:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:38.754 00:40:31 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:38.754 00:40:31 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:38.754 00:40:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:38.754 00:40:31 -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 [2024-04-27 00:40:31.186035] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.754 00:40:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:38.754 00:40:31 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:38.754 00:40:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:38.754 00:40:31 -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 Malloc1 00:06:38.754 00:40:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:38.754 00:40:31 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:38.754 00:40:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:38.754 00:40:31 -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 00:40:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:38.754 00:40:31 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:38.754 00:40:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:38.754 00:40:31 -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 00:40:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:38.754 00:40:31 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:38.754 00:40:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:38.754 00:40:31 -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 [2024-04-27 00:40:31.333961] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:38.754 00:40:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:38.754 00:40:31 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:06:38.754 00:40:31 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:38.754 00:40:31 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:38.754 00:40:31 -- common/autotest_common.sh@1366 -- # local bs 00:06:38.754 00:40:31 -- common/autotest_common.sh@1367 -- # local nb 00:06:38.754 00:40:31 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:38.754 00:40:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:38.754 00:40:31 -- common/autotest_common.sh@10 -- # set +x 00:06:38.754 00:40:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:38.754 00:40:31 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:38.754 { 00:06:38.754 "name": "Malloc1", 00:06:38.754 "aliases": [ 00:06:38.754 "77910052-acd4-491d-8444-978689a8a160" 00:06:38.754 ], 00:06:38.754 "product_name": "Malloc disk", 00:06:38.754 "block_size": 512, 00:06:38.754 "num_blocks": 1048576, 00:06:38.754 "uuid": "77910052-acd4-491d-8444-978689a8a160", 00:06:38.754 "assigned_rate_limits": { 00:06:38.754 "rw_ios_per_sec": 0, 00:06:38.754 "rw_mbytes_per_sec": 0, 00:06:38.754 "r_mbytes_per_sec": 0, 00:06:38.754 "w_mbytes_per_sec": 0 00:06:38.754 }, 00:06:38.754 "claimed": true, 00:06:38.754 "claim_type": "exclusive_write", 00:06:38.754 "zoned": false, 00:06:38.754 "supported_io_types": { 00:06:38.754 "read": true, 00:06:38.754 "write": true, 00:06:38.754 "unmap": true, 00:06:38.754 "write_zeroes": true, 00:06:38.754 "flush": true, 00:06:38.754 "reset": true, 00:06:38.754 "compare": false, 00:06:38.754 "compare_and_write": false, 00:06:38.754 "abort": true, 00:06:38.754 "nvme_admin": false, 00:06:38.754 "nvme_io": false 00:06:38.754 }, 00:06:38.754 "memory_domains": [ 00:06:38.754 { 00:06:38.754 "dma_device_id": "system", 00:06:38.754 "dma_device_type": 1 00:06:38.754 }, 00:06:38.754 { 00:06:38.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.754 "dma_device_type": 2 00:06:38.754 } 00:06:38.754 ], 00:06:38.754 "driver_specific": {} 00:06:38.754 } 00:06:38.754 ]' 00:06:38.754 00:40:31 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:38.754 00:40:31 -- common/autotest_common.sh@1369 -- # bs=512 00:06:38.754 00:40:31 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:38.754 00:40:31 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:38.754 00:40:31 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:38.754 00:40:31 -- common/autotest_common.sh@1374 -- # echo 512 00:06:38.754 00:40:31 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:38.754 00:40:31 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:40.132 00:40:32 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:40.132 00:40:32 -- common/autotest_common.sh@1184 -- # local i=0 00:06:40.132 00:40:32 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:40.132 00:40:32 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:40.132 00:40:32 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:42.046 00:40:34 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:42.046 00:40:34 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:42.046 00:40:34 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:42.046 00:40:34 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
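Between filesystem.sh@52 and @60 the target is configured entirely over JSON-RPC and then attached from the initiator with nvme-cli; rpc_cmd in the trace is the harness's wrapper for SPDK's scripts/rpc.py, talking to the nvmf_tgt started inside the namespace. A hedged sketch of the equivalent standalone calls (NQNs, serial, and addresses are the ones from this run; the rpc.py path is an assumption):

# Target side: one transport, one malloc bdev, one subsystem exported on 10.0.0.2:4420.
RPC="./scripts/rpc.py"                                      # assumed location of the SPDK RPC client
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0           # TCP transport, no in-capsule data
$RPC bdev_malloc_create 512 512 -b Malloc1                  # 512 MiB bdev with 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, then wait until the namespace shows up under its serial number.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 1; done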
00:06:42.046 00:40:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:42.046 00:40:34 -- common/autotest_common.sh@1194 -- # return 0 00:06:42.046 00:40:34 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:42.046 00:40:34 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:42.046 00:40:34 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:42.046 00:40:34 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:42.046 00:40:34 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:42.046 00:40:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:42.046 00:40:34 -- setup/common.sh@80 -- # echo 536870912 00:06:42.046 00:40:34 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:42.046 00:40:34 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:42.046 00:40:34 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:42.046 00:40:34 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:42.614 00:40:35 -- target/filesystem.sh@69 -- # partprobe 00:06:42.873 00:40:35 -- target/filesystem.sh@70 -- # sleep 1 00:06:44.251 00:40:36 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:44.251 00:40:36 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:44.251 00:40:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:44.251 00:40:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.251 00:40:36 -- common/autotest_common.sh@10 -- # set +x 00:06:44.251 ************************************ 00:06:44.251 START TEST filesystem_ext4 00:06:44.251 ************************************ 00:06:44.251 00:40:36 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:44.251 00:40:36 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:44.251 00:40:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:44.251 00:40:36 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:44.251 00:40:36 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:44.251 00:40:36 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:44.251 00:40:36 -- common/autotest_common.sh@914 -- # local i=0 00:06:44.251 00:40:36 -- common/autotest_common.sh@915 -- # local force 00:06:44.251 00:40:36 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:44.251 00:40:36 -- common/autotest_common.sh@918 -- # force=-F 00:06:44.251 00:40:36 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:44.251 mke2fs 1.46.5 (30-Dec-2021) 00:06:44.251 Discarding device blocks: 0/522240 done 00:06:44.251 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:44.251 Filesystem UUID: 64b1907e-4a31-49a2-9cdc-6b2955a44934 00:06:44.251 Superblock backups stored on blocks: 00:06:44.251 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:44.251 00:06:44.251 Allocating group tables: 0/64 done 00:06:44.251 Writing inode tables: 0/64 done 00:06:44.509 Creating journal (8192 blocks): done 00:06:45.447 Writing superblocks and filesystem accounting information: 0/64 done 00:06:45.447 00:06:45.447 00:40:38 -- common/autotest_common.sh@931 -- # return 0 00:06:45.447 00:40:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:45.707 00:40:38 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:45.707 00:40:38 -- target/filesystem.sh@25 -- # sync 00:06:45.707 00:40:38 -- target/filesystem.sh@26 -- # rm 
/mnt/device/aaa 00:06:45.707 00:40:38 -- target/filesystem.sh@27 -- # sync 00:06:45.967 00:40:38 -- target/filesystem.sh@29 -- # i=0 00:06:45.967 00:40:38 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:45.967 00:40:38 -- target/filesystem.sh@37 -- # kill -0 1537579 00:06:45.967 00:40:38 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:45.967 00:40:38 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:45.967 00:40:38 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:45.967 00:40:38 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:45.967 00:06:45.967 real 0m1.759s 00:06:45.967 user 0m0.022s 00:06:45.967 sys 0m0.069s 00:06:45.967 00:40:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:45.967 00:40:38 -- common/autotest_common.sh@10 -- # set +x 00:06:45.967 ************************************ 00:06:45.967 END TEST filesystem_ext4 00:06:45.967 ************************************ 00:06:45.967 00:40:38 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:45.967 00:40:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:45.967 00:40:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.967 00:40:38 -- common/autotest_common.sh@10 -- # set +x 00:06:45.967 ************************************ 00:06:45.967 START TEST filesystem_btrfs 00:06:45.967 ************************************ 00:06:45.967 00:40:38 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:45.967 00:40:38 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:45.967 00:40:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:45.967 00:40:38 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:45.967 00:40:38 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:06:45.967 00:40:38 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:45.967 00:40:38 -- common/autotest_common.sh@914 -- # local i=0 00:06:45.967 00:40:38 -- common/autotest_common.sh@915 -- # local force 00:06:45.967 00:40:38 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:06:45.967 00:40:38 -- common/autotest_common.sh@920 -- # force=-f 00:06:45.967 00:40:38 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:46.227 btrfs-progs v6.6.2 00:06:46.227 See https://btrfs.readthedocs.io for more information. 00:06:46.227 00:06:46.227 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
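Each filesystem_* subtest here repeats the same verification loop from filesystem.sh@21-43: build the filesystem on the exported namespace's partition, mount it, create and remove a file with syncs in between, unmount, then check that the nvmf_tgt process is still alive and that the controller and partition are still visible. A condensed sketch of that loop (device path and pid as seen in this run; the harness additionally retries a busy umount):

# Condensed sketch of the per-filesystem check driven by filesystem.sh.
DEV=/dev/nvme0n1p1
NVMF_PID=1537579

mount "$DEV" /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

kill -0 "$NVMF_PID"                       # target application must still be running
lsblk -l -o NAME | grep -q -w nvme0n1     # controller still present
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present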
00:06:46.227 NOTE: several default settings have changed in version 5.15, please make sure 00:06:46.227 this does not affect your deployments: 00:06:46.227 - DUP for metadata (-m dup) 00:06:46.227 - enabled no-holes (-O no-holes) 00:06:46.227 - enabled free-space-tree (-R free-space-tree) 00:06:46.227 00:06:46.227 Label: (null) 00:06:46.227 UUID: 26b7f2e8-ea0b-48a5-8019-32892686382e 00:06:46.227 Node size: 16384 00:06:46.227 Sector size: 4096 00:06:46.227 Filesystem size: 510.00MiB 00:06:46.227 Block group profiles: 00:06:46.227 Data: single 8.00MiB 00:06:46.227 Metadata: DUP 32.00MiB 00:06:46.227 System: DUP 8.00MiB 00:06:46.227 SSD detected: yes 00:06:46.227 Zoned device: no 00:06:46.227 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:46.227 Runtime features: free-space-tree 00:06:46.227 Checksum: crc32c 00:06:46.227 Number of devices: 1 00:06:46.227 Devices: 00:06:46.227 ID SIZE PATH 00:06:46.227 1 510.00MiB /dev/nvme0n1p1 00:06:46.227 00:06:46.227 00:40:38 -- common/autotest_common.sh@931 -- # return 0 00:06:46.227 00:40:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:47.605 00:40:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:47.605 00:40:39 -- target/filesystem.sh@25 -- # sync 00:06:47.605 00:40:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:47.605 00:40:39 -- target/filesystem.sh@27 -- # sync 00:06:47.605 00:40:39 -- target/filesystem.sh@29 -- # i=0 00:06:47.605 00:40:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:47.605 00:40:39 -- target/filesystem.sh@37 -- # kill -0 1537579 00:06:47.605 00:40:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:47.605 00:40:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:47.605 00:40:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:47.605 00:40:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:47.605 00:06:47.605 real 0m1.373s 00:06:47.605 user 0m0.031s 00:06:47.605 sys 0m0.120s 00:06:47.605 00:40:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:47.605 00:40:39 -- common/autotest_common.sh@10 -- # set +x 00:06:47.605 ************************************ 00:06:47.605 END TEST filesystem_btrfs 00:06:47.605 ************************************ 00:06:47.605 00:40:40 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:47.605 00:40:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:47.605 00:40:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.606 00:40:40 -- common/autotest_common.sh@10 -- # set +x 00:06:47.606 ************************************ 00:06:47.606 START TEST filesystem_xfs 00:06:47.606 ************************************ 00:06:47.606 00:40:40 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:06:47.606 00:40:40 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:47.606 00:40:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:47.606 00:40:40 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:47.606 00:40:40 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:06:47.606 00:40:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:47.606 00:40:40 -- common/autotest_common.sh@914 -- # local i=0 00:06:47.606 00:40:40 -- common/autotest_common.sh@915 -- # local force 00:06:47.606 00:40:40 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:06:47.606 00:40:40 -- common/autotest_common.sh@920 -- # force=-f 00:06:47.606 00:40:40 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:47.606 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:47.606 = sectsz=512 attr=2, projid32bit=1 00:06:47.606 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:47.606 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:47.606 data = bsize=4096 blocks=130560, imaxpct=25 00:06:47.606 = sunit=0 swidth=0 blks 00:06:47.606 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:47.606 log =internal log bsize=4096 blocks=16384, version=2 00:06:47.606 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:47.606 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:48.545 Discarding blocks...Done. 00:06:48.545 00:40:41 -- common/autotest_common.sh@931 -- # return 0 00:06:48.545 00:40:41 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:50.498 00:40:42 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:50.498 00:40:42 -- target/filesystem.sh@25 -- # sync 00:06:50.498 00:40:42 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:50.498 00:40:42 -- target/filesystem.sh@27 -- # sync 00:06:50.498 00:40:42 -- target/filesystem.sh@29 -- # i=0 00:06:50.498 00:40:42 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:50.498 00:40:42 -- target/filesystem.sh@37 -- # kill -0 1537579 00:06:50.498 00:40:42 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:50.498 00:40:42 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:50.498 00:40:42 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:50.498 00:40:42 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:50.498 00:06:50.498 real 0m2.814s 00:06:50.498 user 0m0.022s 00:06:50.498 sys 0m0.074s 00:06:50.498 00:40:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:50.498 00:40:42 -- common/autotest_common.sh@10 -- # set +x 00:06:50.498 ************************************ 00:06:50.498 END TEST filesystem_xfs 00:06:50.498 ************************************ 00:06:50.498 00:40:43 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:50.757 00:40:43 -- target/filesystem.sh@93 -- # sync 00:06:50.757 00:40:43 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:50.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:50.757 00:40:43 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:50.757 00:40:43 -- common/autotest_common.sh@1205 -- # local i=0 00:06:50.757 00:40:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:06:50.757 00:40:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:50.757 00:40:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:06:50.757 00:40:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:50.757 00:40:43 -- common/autotest_common.sh@1217 -- # return 0 00:06:50.757 00:40:43 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:50.757 00:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:50.757 00:40:43 -- common/autotest_common.sh@10 -- # set +x 00:06:51.016 00:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.016 00:40:43 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:51.016 00:40:43 -- target/filesystem.sh@101 -- # killprocess 1537579 00:06:51.016 00:40:43 -- common/autotest_common.sh@936 -- # '[' -z 1537579 ']' 00:06:51.016 00:40:43 -- common/autotest_common.sh@940 -- # kill -0 1537579 00:06:51.016 00:40:43 -- 
common/autotest_common.sh@941 -- # uname 00:06:51.016 00:40:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:51.016 00:40:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1537579 00:06:51.016 00:40:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:51.016 00:40:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:51.016 00:40:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1537579' 00:06:51.016 killing process with pid 1537579 00:06:51.016 00:40:43 -- common/autotest_common.sh@955 -- # kill 1537579 00:06:51.016 00:40:43 -- common/autotest_common.sh@960 -- # wait 1537579 00:06:51.276 00:40:43 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:51.276 00:06:51.276 real 0m13.588s 00:06:51.276 user 0m53.467s 00:06:51.276 sys 0m1.406s 00:06:51.276 00:40:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:51.276 00:40:43 -- common/autotest_common.sh@10 -- # set +x 00:06:51.276 ************************************ 00:06:51.276 END TEST nvmf_filesystem_no_in_capsule 00:06:51.276 ************************************ 00:06:51.276 00:40:43 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:51.276 00:40:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:51.276 00:40:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.276 00:40:43 -- common/autotest_common.sh@10 -- # set +x 00:06:51.536 ************************************ 00:06:51.536 START TEST nvmf_filesystem_in_capsule 00:06:51.536 ************************************ 00:06:51.536 00:40:44 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:06:51.536 00:40:44 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:51.536 00:40:44 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:51.536 00:40:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:06:51.536 00:40:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:51.536 00:40:44 -- common/autotest_common.sh@10 -- # set +x 00:06:51.536 00:40:44 -- nvmf/common.sh@470 -- # nvmfpid=1540135 00:06:51.536 00:40:44 -- nvmf/common.sh@471 -- # waitforlisten 1540135 00:06:51.536 00:40:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:51.536 00:40:44 -- common/autotest_common.sh@817 -- # '[' -z 1540135 ']' 00:06:51.536 00:40:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.536 00:40:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:51.536 00:40:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.536 00:40:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:51.536 00:40:44 -- common/autotest_common.sh@10 -- # set +x 00:06:51.536 [2024-04-27 00:40:44.092310] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
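The nvmf_filesystem_in_capsule run starting here repeats the sequence above; the difference visible in the trace is the value filesystem.sh@52 passes to -c when creating the transport, which lets command capsules carry up to 4096 bytes of data in-line instead of none:

# Transport creation in the two runs (from the trace):
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # no_in_capsule run above
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096     # in_capsule run below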
00:06:51.536 [2024-04-27 00:40:44.092346] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.536 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.536 [2024-04-27 00:40:44.148291] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.536 [2024-04-27 00:40:44.226338] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.536 [2024-04-27 00:40:44.226377] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.536 [2024-04-27 00:40:44.226384] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.536 [2024-04-27 00:40:44.226390] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.536 [2024-04-27 00:40:44.226395] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.536 [2024-04-27 00:40:44.226435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.536 [2024-04-27 00:40:44.226529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.536 [2024-04-27 00:40:44.226626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.536 [2024-04-27 00:40:44.226627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.475 00:40:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:52.475 00:40:44 -- common/autotest_common.sh@850 -- # return 0 00:06:52.475 00:40:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:06:52.475 00:40:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:52.475 00:40:44 -- common/autotest_common.sh@10 -- # set +x 00:06:52.475 00:40:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:52.475 00:40:44 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:52.475 00:40:44 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:52.475 00:40:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.475 00:40:44 -- common/autotest_common.sh@10 -- # set +x 00:06:52.475 [2024-04-27 00:40:44.933994] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:52.475 00:40:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.475 00:40:44 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:52.475 00:40:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.476 00:40:44 -- common/autotest_common.sh@10 -- # set +x 00:06:52.476 Malloc1 00:06:52.476 00:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.476 00:40:45 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:52.476 00:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.476 00:40:45 -- common/autotest_common.sh@10 -- # set +x 00:06:52.476 00:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.476 00:40:45 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:52.476 00:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.476 00:40:45 -- common/autotest_common.sh@10 -- # set +x 00:06:52.476 00:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.476 00:40:45 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:52.476 00:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.476 00:40:45 -- common/autotest_common.sh@10 -- # set +x 00:06:52.476 [2024-04-27 00:40:45.082688] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.476 00:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.476 00:40:45 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:52.476 00:40:45 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:06:52.476 00:40:45 -- common/autotest_common.sh@1365 -- # local bdev_info 00:06:52.476 00:40:45 -- common/autotest_common.sh@1366 -- # local bs 00:06:52.476 00:40:45 -- common/autotest_common.sh@1367 -- # local nb 00:06:52.476 00:40:45 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:52.476 00:40:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:52.476 00:40:45 -- common/autotest_common.sh@10 -- # set +x 00:06:52.476 00:40:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:52.476 00:40:45 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:06:52.476 { 00:06:52.476 "name": "Malloc1", 00:06:52.476 "aliases": [ 00:06:52.476 "6698809d-1157-4c0e-a712-6b1da16fde21" 00:06:52.476 ], 00:06:52.476 "product_name": "Malloc disk", 00:06:52.476 "block_size": 512, 00:06:52.476 "num_blocks": 1048576, 00:06:52.476 "uuid": "6698809d-1157-4c0e-a712-6b1da16fde21", 00:06:52.476 "assigned_rate_limits": { 00:06:52.476 "rw_ios_per_sec": 0, 00:06:52.476 "rw_mbytes_per_sec": 0, 00:06:52.476 "r_mbytes_per_sec": 0, 00:06:52.476 "w_mbytes_per_sec": 0 00:06:52.476 }, 00:06:52.476 "claimed": true, 00:06:52.476 "claim_type": "exclusive_write", 00:06:52.476 "zoned": false, 00:06:52.476 "supported_io_types": { 00:06:52.476 "read": true, 00:06:52.476 "write": true, 00:06:52.476 "unmap": true, 00:06:52.476 "write_zeroes": true, 00:06:52.476 "flush": true, 00:06:52.476 "reset": true, 00:06:52.476 "compare": false, 00:06:52.476 "compare_and_write": false, 00:06:52.476 "abort": true, 00:06:52.476 "nvme_admin": false, 00:06:52.476 "nvme_io": false 00:06:52.476 }, 00:06:52.476 "memory_domains": [ 00:06:52.476 { 00:06:52.476 "dma_device_id": "system", 00:06:52.476 "dma_device_type": 1 00:06:52.476 }, 00:06:52.476 { 00:06:52.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.476 "dma_device_type": 2 00:06:52.476 } 00:06:52.476 ], 00:06:52.476 "driver_specific": {} 00:06:52.476 } 00:06:52.476 ]' 00:06:52.476 00:40:45 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:06:52.476 00:40:45 -- common/autotest_common.sh@1369 -- # bs=512 00:06:52.476 00:40:45 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:06:52.736 00:40:45 -- common/autotest_common.sh@1370 -- # nb=1048576 00:06:52.736 00:40:45 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:06:52.736 00:40:45 -- common/autotest_common.sh@1374 -- # echo 512 00:06:52.736 00:40:45 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:52.736 00:40:45 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:54.115 00:40:46 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:54.115 00:40:46 -- common/autotest_common.sh@1184 -- # local i=0 00:06:54.115 00:40:46 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:06:54.115 00:40:46 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:06:54.115 00:40:46 -- common/autotest_common.sh@1191 -- # sleep 2 00:06:56.019 00:40:48 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:06:56.019 00:40:48 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:06:56.019 00:40:48 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:06:56.019 00:40:48 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:06:56.019 00:40:48 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:06:56.019 00:40:48 -- common/autotest_common.sh@1194 -- # return 0 00:06:56.019 00:40:48 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:56.019 00:40:48 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:56.019 00:40:48 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:56.019 00:40:48 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:56.019 00:40:48 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:56.019 00:40:48 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:56.019 00:40:48 -- setup/common.sh@80 -- # echo 536870912 00:06:56.019 00:40:48 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:56.019 00:40:48 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:56.019 00:40:48 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:56.020 00:40:48 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:56.020 00:40:48 -- target/filesystem.sh@69 -- # partprobe 00:06:56.587 00:40:48 -- target/filesystem.sh@70 -- # sleep 1 00:06:57.525 00:40:49 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:57.525 00:40:49 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:57.525 00:40:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:57.525 00:40:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.525 00:40:49 -- common/autotest_common.sh@10 -- # set +x 00:06:57.525 ************************************ 00:06:57.525 START TEST filesystem_in_capsule_ext4 00:06:57.525 ************************************ 00:06:57.525 00:40:50 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:57.525 00:40:50 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:57.525 00:40:50 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:57.525 00:40:50 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:57.525 00:40:50 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:06:57.525 00:40:50 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:06:57.525 00:40:50 -- common/autotest_common.sh@914 -- # local i=0 00:06:57.525 00:40:50 -- common/autotest_common.sh@915 -- # local force 00:06:57.525 00:40:50 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:06:57.525 00:40:50 -- common/autotest_common.sh@918 -- # force=-F 00:06:57.525 00:40:50 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:57.525 mke2fs 1.46.5 (30-Dec-2021) 00:06:57.525 Discarding device blocks: 0/522240 done 00:06:57.784 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:57.784 Filesystem UUID: 87d4ed4c-7c37-47ad-8700-8b3baa276d0d 00:06:57.784 Superblock backups stored on blocks: 00:06:57.784 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:57.784 00:06:57.784 
Allocating group tables: 0/64 done 00:06:57.784 Writing inode tables: 0/64 done 00:06:59.690 Creating journal (8192 blocks): done 00:07:00.517 Writing superblocks and filesystem accounting information: 0/64 done 00:07:00.517 00:07:00.517 00:40:52 -- common/autotest_common.sh@931 -- # return 0 00:07:00.517 00:40:52 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:00.517 00:40:53 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:00.776 00:40:53 -- target/filesystem.sh@25 -- # sync 00:07:00.776 00:40:53 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:00.776 00:40:53 -- target/filesystem.sh@27 -- # sync 00:07:00.776 00:40:53 -- target/filesystem.sh@29 -- # i=0 00:07:00.776 00:40:53 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:00.776 00:40:53 -- target/filesystem.sh@37 -- # kill -0 1540135 00:07:00.776 00:40:53 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:00.776 00:40:53 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:00.776 00:40:53 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:00.776 00:40:53 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:00.776 00:07:00.776 real 0m3.159s 00:07:00.776 user 0m0.018s 00:07:00.776 sys 0m0.072s 00:07:00.776 00:40:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:00.776 00:40:53 -- common/autotest_common.sh@10 -- # set +x 00:07:00.776 ************************************ 00:07:00.776 END TEST filesystem_in_capsule_ext4 00:07:00.776 ************************************ 00:07:00.776 00:40:53 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:00.776 00:40:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:00.776 00:40:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.776 00:40:53 -- common/autotest_common.sh@10 -- # set +x 00:07:00.776 ************************************ 00:07:00.776 START TEST filesystem_in_capsule_btrfs 00:07:00.776 ************************************ 00:07:00.776 00:40:53 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:00.776 00:40:53 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:00.776 00:40:53 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:00.776 00:40:53 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:00.776 00:40:53 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:00.776 00:40:53 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:00.776 00:40:53 -- common/autotest_common.sh@914 -- # local i=0 00:07:00.776 00:40:53 -- common/autotest_common.sh@915 -- # local force 00:07:00.776 00:40:53 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:00.776 00:40:53 -- common/autotest_common.sh@920 -- # force=-f 00:07:00.776 00:40:53 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:01.344 btrfs-progs v6.6.2 00:07:01.344 See https://btrfs.readthedocs.io for more information. 00:07:01.344 00:07:01.344 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:01.344 NOTE: several default settings have changed in version 5.15, please make sure 00:07:01.344 this does not affect your deployments: 00:07:01.344 - DUP for metadata (-m dup) 00:07:01.344 - enabled no-holes (-O no-holes) 00:07:01.344 - enabled free-space-tree (-R free-space-tree) 00:07:01.344 00:07:01.344 Label: (null) 00:07:01.344 UUID: 0fb2559d-e7c6-48a9-8103-f7efbefbc831 00:07:01.344 Node size: 16384 00:07:01.344 Sector size: 4096 00:07:01.344 Filesystem size: 510.00MiB 00:07:01.344 Block group profiles: 00:07:01.344 Data: single 8.00MiB 00:07:01.344 Metadata: DUP 32.00MiB 00:07:01.344 System: DUP 8.00MiB 00:07:01.344 SSD detected: yes 00:07:01.344 Zoned device: no 00:07:01.344 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:01.344 Runtime features: free-space-tree 00:07:01.344 Checksum: crc32c 00:07:01.344 Number of devices: 1 00:07:01.344 Devices: 00:07:01.344 ID SIZE PATH 00:07:01.344 1 510.00MiB /dev/nvme0n1p1 00:07:01.344 00:07:01.344 00:40:53 -- common/autotest_common.sh@931 -- # return 0 00:07:01.344 00:40:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:02.283 00:40:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:02.283 00:40:54 -- target/filesystem.sh@25 -- # sync 00:07:02.283 00:40:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:02.283 00:40:54 -- target/filesystem.sh@27 -- # sync 00:07:02.283 00:40:54 -- target/filesystem.sh@29 -- # i=0 00:07:02.283 00:40:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:02.283 00:40:54 -- target/filesystem.sh@37 -- # kill -0 1540135 00:07:02.283 00:40:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:02.283 00:40:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:02.283 00:40:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:02.283 00:40:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:02.283 00:07:02.283 real 0m1.252s 00:07:02.283 user 0m0.025s 00:07:02.283 sys 0m0.127s 00:07:02.283 00:40:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:02.283 00:40:54 -- common/autotest_common.sh@10 -- # set +x 00:07:02.283 ************************************ 00:07:02.283 END TEST filesystem_in_capsule_btrfs 00:07:02.283 ************************************ 00:07:02.283 00:40:54 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:02.283 00:40:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:02.283 00:40:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.283 00:40:54 -- common/autotest_common.sh@10 -- # set +x 00:07:02.283 ************************************ 00:07:02.283 START TEST filesystem_in_capsule_xfs 00:07:02.283 ************************************ 00:07:02.283 00:40:54 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:02.283 00:40:54 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:02.283 00:40:54 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:02.283 00:40:54 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:02.283 00:40:54 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:02.283 00:40:54 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:02.283 00:40:54 -- common/autotest_common.sh@914 -- # local i=0 00:07:02.283 00:40:54 -- common/autotest_common.sh@915 -- # local force 00:07:02.283 00:40:54 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:02.283 00:40:54 -- common/autotest_common.sh@920 -- # force=-f 
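The make_filesystem helper traced at autotest_common.sh@912-931 wraps every mkfs call in these subtests: it selects -F for ext4 and -f for btrfs/xfs, and keeps a retry counter so a transiently busy device does not fail the test immediately. A simplified sketch of what the trace shows (only the flag selection and the mkfs call are visible above; the retry bound and sleep are assumptions):

# Simplified make_filesystem sketch; the retry bound is an assumption, not from the trace.
make_filesystem() {
    local fstype=$1 dev_name=$2
    local i=0 force

    if [ "$fstype" = ext4 ]; then
        force=-F                            # mkfs.ext4 forces with -F
    else
        force=-f                            # mkfs.btrfs and mkfs.xfs force with -f
    fi

    until mkfs."$fstype" $force "$dev_name"; do
        i=$((i + 1))
        [ "$i" -lt 3 ] || return 1          # give up after a few attempts (assumed bound)
        sleep 1
    done
}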
00:07:02.283 00:40:54 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:02.283 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:02.283 = sectsz=512 attr=2, projid32bit=1 00:07:02.283 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:02.283 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:02.283 data = bsize=4096 blocks=130560, imaxpct=25 00:07:02.283 = sunit=0 swidth=0 blks 00:07:02.283 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:02.283 log =internal log bsize=4096 blocks=16384, version=2 00:07:02.283 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:02.283 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:03.222 Discarding blocks...Done. 00:07:03.222 00:40:55 -- common/autotest_common.sh@931 -- # return 0 00:07:03.222 00:40:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:05.128 00:40:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:05.128 00:40:57 -- target/filesystem.sh@25 -- # sync 00:07:05.128 00:40:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:05.128 00:40:57 -- target/filesystem.sh@27 -- # sync 00:07:05.128 00:40:57 -- target/filesystem.sh@29 -- # i=0 00:07:05.128 00:40:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:05.128 00:40:57 -- target/filesystem.sh@37 -- # kill -0 1540135 00:07:05.128 00:40:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:05.128 00:40:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:05.128 00:40:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:05.128 00:40:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:05.128 00:07:05.128 real 0m2.787s 00:07:05.128 user 0m0.020s 00:07:05.128 sys 0m0.076s 00:07:05.128 00:40:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.128 00:40:57 -- common/autotest_common.sh@10 -- # set +x 00:07:05.128 ************************************ 00:07:05.128 END TEST filesystem_in_capsule_xfs 00:07:05.128 ************************************ 00:07:05.128 00:40:57 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:05.128 00:40:57 -- target/filesystem.sh@93 -- # sync 00:07:05.128 00:40:57 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:05.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:05.388 00:40:57 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:05.388 00:40:57 -- common/autotest_common.sh@1205 -- # local i=0 00:07:05.388 00:40:57 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:05.388 00:40:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.388 00:40:57 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:05.388 00:40:57 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.388 00:40:57 -- common/autotest_common.sh@1217 -- # return 0 00:07:05.388 00:40:57 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:05.388 00:40:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:05.388 00:40:57 -- common/autotest_common.sh@10 -- # set +x 00:07:05.388 00:40:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:05.388 00:40:57 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:05.388 00:40:57 -- target/filesystem.sh@101 -- # killprocess 1540135 00:07:05.388 00:40:57 -- common/autotest_common.sh@936 -- # '[' -z 1540135 ']' 00:07:05.388 00:40:57 -- common/autotest_common.sh@940 -- # kill -0 1540135 
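Teardown at filesystem.sh@91-101 mirrors the setup: drop the test partition, disconnect the initiator, wait for the serial to disappear from lsblk, delete the subsystem over RPC, then kill and reap the nvmf_tgt. A sketch of those steps with the values from this run:

# Sketch of the teardown sequence shown in the trace (pid and NQN from this run).
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1                 # remove the SPDK_TEST partition
sync

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done

rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

kill 1540135                                                   # nvmf_tgt pid for the in_capsule run
wait 1540135                                                   # reap it (only works from the launching shell)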
00:07:05.388 00:40:57 -- common/autotest_common.sh@941 -- # uname 00:07:05.388 00:40:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:05.388 00:40:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1540135 00:07:05.388 00:40:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:05.388 00:40:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:05.388 00:40:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1540135' 00:07:05.388 killing process with pid 1540135 00:07:05.388 00:40:57 -- common/autotest_common.sh@955 -- # kill 1540135 00:07:05.388 00:40:57 -- common/autotest_common.sh@960 -- # wait 1540135 00:07:05.957 00:40:58 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:05.957 00:07:05.957 real 0m14.319s 00:07:05.957 user 0m56.355s 00:07:05.957 sys 0m1.406s 00:07:05.957 00:40:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.957 00:40:58 -- common/autotest_common.sh@10 -- # set +x 00:07:05.957 ************************************ 00:07:05.957 END TEST nvmf_filesystem_in_capsule 00:07:05.957 ************************************ 00:07:05.957 00:40:58 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:05.957 00:40:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:05.957 00:40:58 -- nvmf/common.sh@117 -- # sync 00:07:05.957 00:40:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:05.957 00:40:58 -- nvmf/common.sh@120 -- # set +e 00:07:05.957 00:40:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:05.957 00:40:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:05.957 rmmod nvme_tcp 00:07:05.957 rmmod nvme_fabrics 00:07:05.957 rmmod nvme_keyring 00:07:05.957 00:40:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:05.957 00:40:58 -- nvmf/common.sh@124 -- # set -e 00:07:05.957 00:40:58 -- nvmf/common.sh@125 -- # return 0 00:07:05.957 00:40:58 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:05.957 00:40:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:05.957 00:40:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:05.957 00:40:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:05.957 00:40:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:05.957 00:40:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:05.957 00:40:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.957 00:40:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.957 00:40:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.862 00:41:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:07.862 00:07:07.862 real 0m35.895s 00:07:07.862 user 1m51.469s 00:07:07.862 sys 0m7.077s 00:07:07.862 00:41:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:07.862 00:41:00 -- common/autotest_common.sh@10 -- # set +x 00:07:07.862 ************************************ 00:07:07.862 END TEST nvmf_filesystem 00:07:07.862 ************************************ 00:07:07.862 00:41:00 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:07.862 00:41:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:07.862 00:41:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.862 00:41:00 -- common/autotest_common.sh@10 -- # set +x 00:07:08.119 ************************************ 00:07:08.119 START TEST nvmf_discovery 00:07:08.119 ************************************ 00:07:08.119 
00:41:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:08.119 * Looking for test storage... 00:07:08.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.119 00:41:00 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.119 00:41:00 -- nvmf/common.sh@7 -- # uname -s 00:07:08.119 00:41:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.119 00:41:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.119 00:41:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.119 00:41:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.119 00:41:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.119 00:41:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.119 00:41:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.119 00:41:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.119 00:41:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.119 00:41:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.119 00:41:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:08.119 00:41:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:08.119 00:41:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.119 00:41:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.119 00:41:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.119 00:41:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.119 00:41:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.119 00:41:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.119 00:41:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.119 00:41:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.119 00:41:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.119 00:41:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.119 00:41:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.119 00:41:00 -- paths/export.sh@5 -- # export PATH 00:07:08.119 00:41:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.119 00:41:00 -- nvmf/common.sh@47 -- # : 0 00:07:08.119 00:41:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.119 00:41:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.119 00:41:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.119 00:41:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.119 00:41:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.119 00:41:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.119 00:41:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.119 00:41:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.119 00:41:00 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:08.119 00:41:00 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:08.119 00:41:00 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:08.119 00:41:00 -- target/discovery.sh@15 -- # hash nvme 00:07:08.119 00:41:00 -- target/discovery.sh@20 -- # nvmftestinit 00:07:08.119 00:41:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:08.119 00:41:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.119 00:41:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:08.119 00:41:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:08.119 00:41:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:08.119 00:41:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.119 00:41:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.119 00:41:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.119 00:41:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:08.119 00:41:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:08.119 00:41:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:08.119 00:41:00 -- common/autotest_common.sh@10 -- # set +x 00:07:13.407 00:41:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:13.407 00:41:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:13.407 00:41:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:13.407 00:41:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:13.407 00:41:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:13.407 00:41:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:13.407 00:41:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:13.407 00:41:05 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:13.407 00:41:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:13.407 00:41:05 -- nvmf/common.sh@296 -- # e810=() 00:07:13.407 00:41:05 -- nvmf/common.sh@296 -- # local -ga e810 00:07:13.407 00:41:05 -- nvmf/common.sh@297 -- # x722=() 00:07:13.407 00:41:05 -- nvmf/common.sh@297 -- # local -ga x722 00:07:13.407 00:41:05 -- nvmf/common.sh@298 -- # mlx=() 00:07:13.407 00:41:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:13.407 00:41:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.407 00:41:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.407 00:41:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.407 00:41:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.407 00:41:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.407 00:41:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.407 00:41:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.407 00:41:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.407 00:41:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.407 00:41:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.407 00:41:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.407 00:41:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:13.407 00:41:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:13.407 00:41:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:13.407 00:41:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.407 00:41:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:13.407 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:13.407 00:41:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.407 00:41:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:13.407 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:13.407 00:41:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:13.407 00:41:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.407 00:41:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.407 00:41:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:13.407 00:41:05 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.407 00:41:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:13.407 Found net devices under 0000:86:00.0: cvl_0_0 00:07:13.407 00:41:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.407 00:41:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.407 00:41:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.407 00:41:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:13.407 00:41:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.407 00:41:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:13.407 Found net devices under 0000:86:00.1: cvl_0_1 00:07:13.407 00:41:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.407 00:41:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:13.407 00:41:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:13.407 00:41:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:13.407 00:41:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:13.407 00:41:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.407 00:41:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.407 00:41:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.407 00:41:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:13.407 00:41:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.407 00:41:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.407 00:41:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:13.407 00:41:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.407 00:41:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.407 00:41:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:13.407 00:41:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:13.407 00:41:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.407 00:41:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.407 00:41:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.407 00:41:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.407 00:41:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:13.407 00:41:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.407 00:41:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.407 00:41:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.407 00:41:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:13.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:13.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:07:13.407 00:07:13.407 --- 10.0.0.2 ping statistics --- 00:07:13.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.407 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:07:13.407 00:41:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:13.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:07:13.407 00:07:13.407 --- 10.0.0.1 ping statistics --- 00:07:13.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.407 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:07:13.407 00:41:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.407 00:41:06 -- nvmf/common.sh@411 -- # return 0 00:07:13.407 00:41:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:13.407 00:41:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.407 00:41:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:13.407 00:41:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:13.407 00:41:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.408 00:41:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:13.408 00:41:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:13.408 00:41:06 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:13.408 00:41:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:13.408 00:41:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:13.408 00:41:06 -- common/autotest_common.sh@10 -- # set +x 00:07:13.665 00:41:06 -- nvmf/common.sh@470 -- # nvmfpid=1546091 00:07:13.665 00:41:06 -- nvmf/common.sh@471 -- # waitforlisten 1546091 00:07:13.665 00:41:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:13.665 00:41:06 -- common/autotest_common.sh@817 -- # '[' -z 1546091 ']' 00:07:13.665 00:41:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.665 00:41:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:13.665 00:41:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.665 00:41:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:13.665 00:41:06 -- common/autotest_common.sh@10 -- # set +x 00:07:13.665 [2024-04-27 00:41:06.159904] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:07:13.665 [2024-04-27 00:41:06.159950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.665 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.665 [2024-04-27 00:41:06.217012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.665 [2024-04-27 00:41:06.295124] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.665 [2024-04-27 00:41:06.295160] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.665 [2024-04-27 00:41:06.295167] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.665 [2024-04-27 00:41:06.295173] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.666 [2024-04-27 00:41:06.295179] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
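Before the discovery test proper starts, nvmftestinit turns the two E810 ports found above into a small back-to-back TCP test bed: cvl_0_0 is moved into a network namespace and acts as the target side, while cvl_0_1 stays in the root namespace and acts as the initiator. Reduced to the commands traced above (addresses and interface names as in this run):

  ip netns add cvl_0_0_ns_spdk                                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                    # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator sanity check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # start the target inside the namespace

Because nvmf_tgt itself is launched behind that ip netns exec prefix, the nvme discover calls issued later from the root namespace reach it at 10.0.0.2.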
00:07:13.666 [2024-04-27 00:41:06.295217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.666 [2024-04-27 00:41:06.295235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.666 [2024-04-27 00:41:06.295324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.666 [2024-04-27 00:41:06.295325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.597 00:41:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:14.597 00:41:06 -- common/autotest_common.sh@850 -- # return 0 00:07:14.597 00:41:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:14.597 00:41:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:14.597 00:41:06 -- common/autotest_common.sh@10 -- # set +x 00:07:14.597 00:41:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.597 00:41:06 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:14.597 00:41:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.597 00:41:06 -- common/autotest_common.sh@10 -- # set +x 00:07:14.597 [2024-04-27 00:41:07.000828] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.597 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.597 00:41:07 -- target/discovery.sh@26 -- # seq 1 4 00:07:14.597 00:41:07 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:14.597 00:41:07 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:14.597 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.597 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.597 Null1 00:07:14.597 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.597 00:41:07 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:14.597 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.597 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.597 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.597 00:41:07 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:14.597 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.597 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.597 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.597 00:41:07 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.597 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.597 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.597 [2024-04-27 00:41:07.050376] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.597 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.597 00:41:07 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:14.597 00:41:07 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:14.597 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.597 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.597 Null2 00:07:14.597 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.597 00:41:07 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:14.597 00:41:07 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.597 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.597 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.597 00:41:07 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:14.597 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.597 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.597 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.597 00:41:07 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:14.597 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.597 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.597 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.597 00:41:07 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:14.597 00:41:07 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:14.597 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.597 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.597 Null3 00:07:14.597 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.598 00:41:07 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:14.598 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.598 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.598 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.598 00:41:07 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:14.598 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.598 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.598 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.598 00:41:07 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:14.598 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.598 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.598 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.598 00:41:07 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:14.598 00:41:07 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:14.598 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.598 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.598 Null4 00:07:14.598 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.598 00:41:07 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:14.598 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.598 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.598 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.598 00:41:07 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:14.598 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.598 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.598 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.598 00:41:07 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:14.598 
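discovery.sh then builds four identical subsystems in a seq 1 4 loop, each backed by a null bdev and listening on the same 10.0.0.2:4420 portal. One iteration, written out against the standalone scripts/rpc.py client (the test's rpc_cmd helper issues the same RPC methods), looks roughly like:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                   # once, before the loop
  scripts/rpc.py bdev_null_create Null1 102400 512                         # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the script header
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1    # attach the bdev as a namespace
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Iterations 2 through 4 repeat the last four calls with Null2..Null4 and serial numbers SPDK00000000000002..04, which is why the discovery log reported below contains six records: the current discovery subsystem, the four NVMe subsystems, and the port-4430 referral added right after them.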
00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.598 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.598 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.598 00:41:07 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:14.598 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.598 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.598 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.598 00:41:07 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:14.598 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.598 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.598 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.598 00:41:07 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:14.856 00:07:14.856 Discovery Log Number of Records 6, Generation counter 6 00:07:14.856 =====Discovery Log Entry 0====== 00:07:14.856 trtype: tcp 00:07:14.856 adrfam: ipv4 00:07:14.856 subtype: current discovery subsystem 00:07:14.856 treq: not required 00:07:14.856 portid: 0 00:07:14.856 trsvcid: 4420 00:07:14.856 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:14.856 traddr: 10.0.0.2 00:07:14.856 eflags: explicit discovery connections, duplicate discovery information 00:07:14.856 sectype: none 00:07:14.856 =====Discovery Log Entry 1====== 00:07:14.856 trtype: tcp 00:07:14.856 adrfam: ipv4 00:07:14.856 subtype: nvme subsystem 00:07:14.856 treq: not required 00:07:14.856 portid: 0 00:07:14.856 trsvcid: 4420 00:07:14.856 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:14.856 traddr: 10.0.0.2 00:07:14.856 eflags: none 00:07:14.856 sectype: none 00:07:14.856 =====Discovery Log Entry 2====== 00:07:14.856 trtype: tcp 00:07:14.856 adrfam: ipv4 00:07:14.856 subtype: nvme subsystem 00:07:14.856 treq: not required 00:07:14.856 portid: 0 00:07:14.856 trsvcid: 4420 00:07:14.856 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:14.856 traddr: 10.0.0.2 00:07:14.856 eflags: none 00:07:14.856 sectype: none 00:07:14.856 =====Discovery Log Entry 3====== 00:07:14.856 trtype: tcp 00:07:14.856 adrfam: ipv4 00:07:14.856 subtype: nvme subsystem 00:07:14.856 treq: not required 00:07:14.856 portid: 0 00:07:14.856 trsvcid: 4420 00:07:14.856 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:14.856 traddr: 10.0.0.2 00:07:14.856 eflags: none 00:07:14.856 sectype: none 00:07:14.856 =====Discovery Log Entry 4====== 00:07:14.856 trtype: tcp 00:07:14.856 adrfam: ipv4 00:07:14.856 subtype: nvme subsystem 00:07:14.856 treq: not required 00:07:14.856 portid: 0 00:07:14.856 trsvcid: 4420 00:07:14.856 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:14.856 traddr: 10.0.0.2 00:07:14.856 eflags: none 00:07:14.856 sectype: none 00:07:14.856 =====Discovery Log Entry 5====== 00:07:14.856 trtype: tcp 00:07:14.856 adrfam: ipv4 00:07:14.856 subtype: discovery subsystem referral 00:07:14.856 treq: not required 00:07:14.856 portid: 0 00:07:14.856 trsvcid: 4430 00:07:14.856 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:14.856 traddr: 10.0.0.2 00:07:14.856 eflags: none 00:07:14.856 sectype: none 00:07:14.856 00:41:07 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:14.856 Perform nvmf subsystem discovery via RPC 00:07:14.856 00:41:07 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:14.856 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.856 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.856 [2024-04-27 00:41:07.339203] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:14.856 [ 00:07:14.856 { 00:07:14.856 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:14.856 "subtype": "Discovery", 00:07:14.856 "listen_addresses": [ 00:07:14.856 { 00:07:14.856 "transport": "TCP", 00:07:14.856 "trtype": "TCP", 00:07:14.856 "adrfam": "IPv4", 00:07:14.856 "traddr": "10.0.0.2", 00:07:14.856 "trsvcid": "4420" 00:07:14.856 } 00:07:14.856 ], 00:07:14.856 "allow_any_host": true, 00:07:14.856 "hosts": [] 00:07:14.856 }, 00:07:14.856 { 00:07:14.856 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:14.856 "subtype": "NVMe", 00:07:14.856 "listen_addresses": [ 00:07:14.856 { 00:07:14.856 "transport": "TCP", 00:07:14.856 "trtype": "TCP", 00:07:14.856 "adrfam": "IPv4", 00:07:14.856 "traddr": "10.0.0.2", 00:07:14.856 "trsvcid": "4420" 00:07:14.856 } 00:07:14.856 ], 00:07:14.856 "allow_any_host": true, 00:07:14.856 "hosts": [], 00:07:14.856 "serial_number": "SPDK00000000000001", 00:07:14.856 "model_number": "SPDK bdev Controller", 00:07:14.856 "max_namespaces": 32, 00:07:14.856 "min_cntlid": 1, 00:07:14.856 "max_cntlid": 65519, 00:07:14.856 "namespaces": [ 00:07:14.856 { 00:07:14.856 "nsid": 1, 00:07:14.856 "bdev_name": "Null1", 00:07:14.856 "name": "Null1", 00:07:14.856 "nguid": "1974C64D8F1442D484D01041572292FB", 00:07:14.856 "uuid": "1974c64d-8f14-42d4-84d0-1041572292fb" 00:07:14.856 } 00:07:14.856 ] 00:07:14.856 }, 00:07:14.856 { 00:07:14.856 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:14.856 "subtype": "NVMe", 00:07:14.856 "listen_addresses": [ 00:07:14.856 { 00:07:14.856 "transport": "TCP", 00:07:14.856 "trtype": "TCP", 00:07:14.856 "adrfam": "IPv4", 00:07:14.856 "traddr": "10.0.0.2", 00:07:14.856 "trsvcid": "4420" 00:07:14.856 } 00:07:14.856 ], 00:07:14.856 "allow_any_host": true, 00:07:14.856 "hosts": [], 00:07:14.856 "serial_number": "SPDK00000000000002", 00:07:14.856 "model_number": "SPDK bdev Controller", 00:07:14.856 "max_namespaces": 32, 00:07:14.856 "min_cntlid": 1, 00:07:14.856 "max_cntlid": 65519, 00:07:14.856 "namespaces": [ 00:07:14.856 { 00:07:14.856 "nsid": 1, 00:07:14.856 "bdev_name": "Null2", 00:07:14.856 "name": "Null2", 00:07:14.856 "nguid": "6D40FEC5F53B498A9E72AC898A4E6256", 00:07:14.856 "uuid": "6d40fec5-f53b-498a-9e72-ac898a4e6256" 00:07:14.856 } 00:07:14.856 ] 00:07:14.856 }, 00:07:14.856 { 00:07:14.856 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:14.856 "subtype": "NVMe", 00:07:14.856 "listen_addresses": [ 00:07:14.856 { 00:07:14.856 "transport": "TCP", 00:07:14.856 "trtype": "TCP", 00:07:14.856 "adrfam": "IPv4", 00:07:14.856 "traddr": "10.0.0.2", 00:07:14.856 "trsvcid": "4420" 00:07:14.856 } 00:07:14.856 ], 00:07:14.856 "allow_any_host": true, 00:07:14.856 "hosts": [], 00:07:14.856 "serial_number": "SPDK00000000000003", 00:07:14.856 "model_number": "SPDK bdev Controller", 00:07:14.856 "max_namespaces": 32, 00:07:14.856 "min_cntlid": 1, 00:07:14.856 "max_cntlid": 65519, 00:07:14.856 "namespaces": [ 00:07:14.856 { 00:07:14.856 "nsid": 1, 00:07:14.856 "bdev_name": "Null3", 00:07:14.856 "name": "Null3", 00:07:14.856 "nguid": "F19C5C71CA0849C3841181CB20DCD78F", 00:07:14.856 "uuid": "f19c5c71-ca08-49c3-8411-81cb20dcd78f" 00:07:14.856 } 00:07:14.856 ] 
00:07:14.856 }, 00:07:14.856 { 00:07:14.856 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:14.856 "subtype": "NVMe", 00:07:14.856 "listen_addresses": [ 00:07:14.856 { 00:07:14.856 "transport": "TCP", 00:07:14.856 "trtype": "TCP", 00:07:14.856 "adrfam": "IPv4", 00:07:14.856 "traddr": "10.0.0.2", 00:07:14.856 "trsvcid": "4420" 00:07:14.856 } 00:07:14.856 ], 00:07:14.856 "allow_any_host": true, 00:07:14.856 "hosts": [], 00:07:14.856 "serial_number": "SPDK00000000000004", 00:07:14.856 "model_number": "SPDK bdev Controller", 00:07:14.856 "max_namespaces": 32, 00:07:14.856 "min_cntlid": 1, 00:07:14.856 "max_cntlid": 65519, 00:07:14.856 "namespaces": [ 00:07:14.856 { 00:07:14.856 "nsid": 1, 00:07:14.856 "bdev_name": "Null4", 00:07:14.856 "name": "Null4", 00:07:14.856 "nguid": "D029268B936247C29F3D013FA0C7273D", 00:07:14.856 "uuid": "d029268b-9362-47c2-9f3d-013fa0c7273d" 00:07:14.856 } 00:07:14.856 ] 00:07:14.856 } 00:07:14.856 ] 00:07:14.856 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.856 00:41:07 -- target/discovery.sh@42 -- # seq 1 4 00:07:14.856 00:41:07 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:14.856 00:41:07 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:14.856 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.856 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.856 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.856 00:41:07 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:14.856 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.856 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.856 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.856 00:41:07 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:14.856 00:41:07 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:14.856 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.856 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.856 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.856 00:41:07 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:14.856 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.856 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.856 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.856 00:41:07 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:14.856 00:41:07 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:14.856 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.856 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.856 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.856 00:41:07 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:14.856 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.856 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.856 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.856 00:41:07 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:14.856 00:41:07 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:14.856 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.856 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.856 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
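The listing is verified twice, once on the wire and once over RPC, and then everything is unwound in the same per-index loop used for setup. In outline (again written against scripts/rpc.py; $NVME_HOSTNQN and $NVME_HOSTID are the values produced by nvme gen-hostnqn earlier in this log):

  nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420   # wire-level discovery log, 6 records expected
  scripts/rpc.py nvmf_get_subsystems                                                        # the same view as JSON, dumped above
  for i in 1 2 3 4; do
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    scripts/rpc.py bdev_null_delete Null$i
  done
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430                  # drop the port-4430 referral
  scripts/rpc.py bdev_get_bdevs                                                             # must come back empty before the test passes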
00:07:14.856 00:41:07 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:14.856 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.856 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.856 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.856 00:41:07 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:14.856 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.856 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.857 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.857 00:41:07 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:14.857 00:41:07 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:14.857 00:41:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.857 00:41:07 -- common/autotest_common.sh@10 -- # set +x 00:07:14.857 00:41:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.857 00:41:07 -- target/discovery.sh@49 -- # check_bdevs= 00:07:14.857 00:41:07 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:14.857 00:41:07 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:14.857 00:41:07 -- target/discovery.sh@57 -- # nvmftestfini 00:07:14.857 00:41:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:14.857 00:41:07 -- nvmf/common.sh@117 -- # sync 00:07:14.857 00:41:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:14.857 00:41:07 -- nvmf/common.sh@120 -- # set +e 00:07:14.857 00:41:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:14.857 00:41:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:14.857 rmmod nvme_tcp 00:07:14.857 rmmod nvme_fabrics 00:07:14.857 rmmod nvme_keyring 00:07:14.857 00:41:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:14.857 00:41:07 -- nvmf/common.sh@124 -- # set -e 00:07:14.857 00:41:07 -- nvmf/common.sh@125 -- # return 0 00:07:14.857 00:41:07 -- nvmf/common.sh@478 -- # '[' -n 1546091 ']' 00:07:14.857 00:41:07 -- nvmf/common.sh@479 -- # killprocess 1546091 00:07:14.857 00:41:07 -- common/autotest_common.sh@936 -- # '[' -z 1546091 ']' 00:07:14.857 00:41:07 -- common/autotest_common.sh@940 -- # kill -0 1546091 00:07:14.857 00:41:07 -- common/autotest_common.sh@941 -- # uname 00:07:14.857 00:41:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:15.115 00:41:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1546091 00:07:15.115 00:41:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:15.115 00:41:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:15.115 00:41:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1546091' 00:07:15.115 killing process with pid 1546091 00:07:15.115 00:41:07 -- common/autotest_common.sh@955 -- # kill 1546091 00:07:15.115 [2024-04-27 00:41:07.591229] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:15.115 00:41:07 -- common/autotest_common.sh@960 -- # wait 1546091 00:07:15.115 00:41:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:15.115 00:41:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:15.115 00:41:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:15.115 00:41:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:15.115 00:41:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:15.115 00:41:07 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.115 00:41:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:15.115 00:41:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.646 00:41:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:17.646 00:07:17.646 real 0m9.190s 00:07:17.646 user 0m7.623s 00:07:17.646 sys 0m4.321s 00:07:17.646 00:41:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.646 00:41:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.646 ************************************ 00:07:17.646 END TEST nvmf_discovery 00:07:17.646 ************************************ 00:07:17.646 00:41:09 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:17.646 00:41:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:17.646 00:41:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.646 00:41:09 -- common/autotest_common.sh@10 -- # set +x 00:07:17.646 ************************************ 00:07:17.646 START TEST nvmf_referrals 00:07:17.646 ************************************ 00:07:17.646 00:41:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:17.646 * Looking for test storage... 00:07:17.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.646 00:41:10 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.646 00:41:10 -- nvmf/common.sh@7 -- # uname -s 00:07:17.646 00:41:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.646 00:41:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.646 00:41:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.646 00:41:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.646 00:41:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.646 00:41:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.646 00:41:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.646 00:41:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.646 00:41:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.646 00:41:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.646 00:41:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:17.646 00:41:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:17.646 00:41:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.646 00:41:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.646 00:41:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.646 00:41:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.646 00:41:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.646 00:41:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.646 00:41:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.646 00:41:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.646 00:41:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.646 00:41:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.646 00:41:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.646 00:41:10 -- paths/export.sh@5 -- # export PATH 00:07:17.646 00:41:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.646 00:41:10 -- nvmf/common.sh@47 -- # : 0 00:07:17.646 00:41:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:17.646 00:41:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.646 00:41:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.646 00:41:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.646 00:41:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.646 00:41:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.646 00:41:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.646 00:41:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.646 00:41:10 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:17.646 00:41:10 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:17.646 00:41:10 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:17.646 00:41:10 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:17.646 00:41:10 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:17.646 00:41:10 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:17.646 00:41:10 -- target/referrals.sh@37 -- # nvmftestinit 00:07:17.646 00:41:10 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:07:17.646 00:41:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.646 00:41:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:17.646 00:41:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:17.646 00:41:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:17.647 00:41:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.647 00:41:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.647 00:41:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.647 00:41:10 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:17.647 00:41:10 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:17.647 00:41:10 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:17.647 00:41:10 -- common/autotest_common.sh@10 -- # set +x 00:07:22.915 00:41:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:22.915 00:41:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:22.915 00:41:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:22.915 00:41:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:22.915 00:41:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:22.915 00:41:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:22.915 00:41:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:22.915 00:41:15 -- nvmf/common.sh@295 -- # net_devs=() 00:07:22.915 00:41:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:22.915 00:41:15 -- nvmf/common.sh@296 -- # e810=() 00:07:22.915 00:41:15 -- nvmf/common.sh@296 -- # local -ga e810 00:07:22.915 00:41:15 -- nvmf/common.sh@297 -- # x722=() 00:07:22.915 00:41:15 -- nvmf/common.sh@297 -- # local -ga x722 00:07:22.915 00:41:15 -- nvmf/common.sh@298 -- # mlx=() 00:07:22.915 00:41:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:22.915 00:41:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.915 00:41:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.915 00:41:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.915 00:41:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.915 00:41:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.915 00:41:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.915 00:41:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.915 00:41:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.915 00:41:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.915 00:41:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.915 00:41:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.915 00:41:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:22.915 00:41:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:22.915 00:41:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:22.915 00:41:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.915 00:41:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:22.915 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:22.915 00:41:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.915 00:41:15 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.915 00:41:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:22.915 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:22.915 00:41:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:22.915 00:41:15 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:22.915 00:41:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.915 00:41:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.915 00:41:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:22.915 00:41:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.915 00:41:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:22.916 Found net devices under 0000:86:00.0: cvl_0_0 00:07:22.916 00:41:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.916 00:41:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.916 00:41:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.916 00:41:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:22.916 00:41:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.916 00:41:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:22.916 Found net devices under 0000:86:00.1: cvl_0_1 00:07:22.916 00:41:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.916 00:41:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:22.916 00:41:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:22.916 00:41:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:22.916 00:41:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:22.916 00:41:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:22.916 00:41:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.916 00:41:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.916 00:41:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.916 00:41:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:22.916 00:41:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.916 00:41:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.916 00:41:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:22.916 00:41:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:22.916 00:41:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.916 00:41:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:22.916 00:41:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:22.916 00:41:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.916 00:41:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:07:23.175 00:41:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.175 00:41:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.175 00:41:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:23.175 00:41:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.175 00:41:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.175 00:41:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.175 00:41:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:23.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:23.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:07:23.175 00:07:23.175 --- 10.0.0.2 ping statistics --- 00:07:23.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.175 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:07:23.175 00:41:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:07:23.175 00:07:23.175 --- 10.0.0.1 ping statistics --- 00:07:23.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.175 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:07:23.175 00:41:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.175 00:41:15 -- nvmf/common.sh@411 -- # return 0 00:07:23.175 00:41:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:23.175 00:41:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.175 00:41:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:23.175 00:41:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:23.175 00:41:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.175 00:41:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:23.175 00:41:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:23.175 00:41:15 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:23.175 00:41:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:23.175 00:41:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:23.175 00:41:15 -- common/autotest_common.sh@10 -- # set +x 00:07:23.175 00:41:15 -- nvmf/common.sh@470 -- # nvmfpid=1549975 00:07:23.175 00:41:15 -- nvmf/common.sh@471 -- # waitforlisten 1549975 00:07:23.175 00:41:15 -- common/autotest_common.sh@817 -- # '[' -z 1549975 ']' 00:07:23.175 00:41:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.175 00:41:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:23.175 00:41:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.175 00:41:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:23.175 00:41:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:23.175 00:41:15 -- common/autotest_common.sh@10 -- # set +x 00:07:23.175 [2024-04-27 00:41:15.863176] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
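With a fresh nvmf_tgt brought up the same way, referrals.sh drives the referral RPCs directly against a discovery listener on 10.0.0.2:8009. The calls exercised in the trace that follows boil down to (scripts/rpc.py standing in for rpc_cmd, host NQN/ID as generated earlier):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery        # expose the discovery service itself
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430                 # add three referrals...
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals                                            # ...check them over RPC
  nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 8009 -o json   # ...and on the wire
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430              # then remove them again

Later in the trace the 127.0.0.2 referral is re-added with an explicit subsystem argument (-n discovery, then -n nqn.2016-06.io.spdk:cnode1) and the RPC and nvme-discover IP lists are compared after each step.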
00:07:23.175 [2024-04-27 00:41:15.863220] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.433 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.433 [2024-04-27 00:41:15.920790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.433 [2024-04-27 00:41:16.000490] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.433 [2024-04-27 00:41:16.000525] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.433 [2024-04-27 00:41:16.000533] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.433 [2024-04-27 00:41:16.000539] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.433 [2024-04-27 00:41:16.000545] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:23.433 [2024-04-27 00:41:16.000588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.433 [2024-04-27 00:41:16.000674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.433 [2024-04-27 00:41:16.000699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.433 [2024-04-27 00:41:16.000700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.997 00:41:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:23.997 00:41:16 -- common/autotest_common.sh@850 -- # return 0 00:07:23.997 00:41:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:23.997 00:41:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:23.997 00:41:16 -- common/autotest_common.sh@10 -- # set +x 00:07:24.255 00:41:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.255 00:41:16 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:24.255 00:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.255 00:41:16 -- common/autotest_common.sh@10 -- # set +x 00:07:24.255 [2024-04-27 00:41:16.716856] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.255 00:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.255 00:41:16 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:24.255 00:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.255 00:41:16 -- common/autotest_common.sh@10 -- # set +x 00:07:24.255 [2024-04-27 00:41:16.730292] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:24.255 00:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.255 00:41:16 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:24.255 00:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.255 00:41:16 -- common/autotest_common.sh@10 -- # set +x 00:07:24.255 00:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.255 00:41:16 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:24.255 00:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.255 00:41:16 -- common/autotest_common.sh@10 -- # set +x 00:07:24.255 00:41:16 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:07:24.255 00:41:16 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:24.255 00:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.255 00:41:16 -- common/autotest_common.sh@10 -- # set +x 00:07:24.255 00:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.255 00:41:16 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:24.255 00:41:16 -- target/referrals.sh@48 -- # jq length 00:07:24.255 00:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.255 00:41:16 -- common/autotest_common.sh@10 -- # set +x 00:07:24.255 00:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.255 00:41:16 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:24.255 00:41:16 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:24.255 00:41:16 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:24.255 00:41:16 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:24.255 00:41:16 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:24.255 00:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.255 00:41:16 -- target/referrals.sh@21 -- # sort 00:07:24.255 00:41:16 -- common/autotest_common.sh@10 -- # set +x 00:07:24.255 00:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.255 00:41:16 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:24.256 00:41:16 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:24.256 00:41:16 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:24.256 00:41:16 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:24.256 00:41:16 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:24.256 00:41:16 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:24.256 00:41:16 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:24.256 00:41:16 -- target/referrals.sh@26 -- # sort 00:07:24.513 00:41:17 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:24.513 00:41:17 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:24.513 00:41:17 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:24.513 00:41:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.513 00:41:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.513 00:41:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.513 00:41:17 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:24.513 00:41:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.513 00:41:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.513 00:41:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.513 00:41:17 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:24.513 00:41:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.513 00:41:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.513 00:41:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.513 00:41:17 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:07:24.513 00:41:17 -- target/referrals.sh@56 -- # jq length 00:07:24.513 00:41:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.513 00:41:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.513 00:41:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.513 00:41:17 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:24.513 00:41:17 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:24.513 00:41:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:24.513 00:41:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:24.513 00:41:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:24.513 00:41:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:24.513 00:41:17 -- target/referrals.sh@26 -- # sort 00:07:24.771 00:41:17 -- target/referrals.sh@26 -- # echo 00:07:24.771 00:41:17 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:24.771 00:41:17 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:24.771 00:41:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.771 00:41:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.771 00:41:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.771 00:41:17 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:24.771 00:41:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.771 00:41:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.771 00:41:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.771 00:41:17 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:24.771 00:41:17 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:24.771 00:41:17 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:24.771 00:41:17 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:24.771 00:41:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:24.771 00:41:17 -- target/referrals.sh@21 -- # sort 00:07:24.771 00:41:17 -- common/autotest_common.sh@10 -- # set +x 00:07:24.771 00:41:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:24.771 00:41:17 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:24.771 00:41:17 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:24.771 00:41:17 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:24.771 00:41:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:24.771 00:41:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:24.771 00:41:17 -- target/referrals.sh@26 -- # sort 00:07:24.771 00:41:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:24.771 00:41:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:24.771 00:41:17 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:24.771 00:41:17 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:24.771 00:41:17 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:07:24.771 00:41:17 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:24.771 00:41:17 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:24.771 00:41:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:24.771 00:41:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:25.028 00:41:17 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:25.028 00:41:17 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:25.028 00:41:17 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:25.028 00:41:17 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:25.028 00:41:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:25.028 00:41:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:25.028 00:41:17 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:25.028 00:41:17 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:25.028 00:41:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.028 00:41:17 -- common/autotest_common.sh@10 -- # set +x 00:07:25.028 00:41:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.285 00:41:17 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:25.285 00:41:17 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:25.285 00:41:17 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:25.285 00:41:17 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:25.285 00:41:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.285 00:41:17 -- target/referrals.sh@21 -- # sort 00:07:25.285 00:41:17 -- common/autotest_common.sh@10 -- # set +x 00:07:25.285 00:41:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.285 00:41:17 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:25.285 00:41:17 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:25.285 00:41:17 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:25.285 00:41:17 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:25.285 00:41:17 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:25.285 00:41:17 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:25.285 00:41:17 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:25.285 00:41:17 -- target/referrals.sh@26 -- # sort 00:07:25.285 00:41:17 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:25.285 00:41:17 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:25.285 00:41:17 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:25.285 00:41:17 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:25.285 00:41:17 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:07:25.285 00:41:17 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:25.285 00:41:17 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:25.542 00:41:18 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:25.542 00:41:18 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:25.542 00:41:18 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:25.542 00:41:18 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:25.542 00:41:18 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:25.542 00:41:18 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:25.542 00:41:18 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:25.542 00:41:18 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:25.543 00:41:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.543 00:41:18 -- common/autotest_common.sh@10 -- # set +x 00:07:25.543 00:41:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.543 00:41:18 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:25.543 00:41:18 -- target/referrals.sh@82 -- # jq length 00:07:25.543 00:41:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:25.543 00:41:18 -- common/autotest_common.sh@10 -- # set +x 00:07:25.543 00:41:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:25.543 00:41:18 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:25.543 00:41:18 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:25.543 00:41:18 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:25.543 00:41:18 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:25.543 00:41:18 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:25.543 00:41:18 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:25.543 00:41:18 -- target/referrals.sh@26 -- # sort 00:07:25.829 00:41:18 -- target/referrals.sh@26 -- # echo 00:07:25.829 00:41:18 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:25.829 00:41:18 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:25.829 00:41:18 -- target/referrals.sh@86 -- # nvmftestfini 00:07:25.829 00:41:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:25.829 00:41:18 -- nvmf/common.sh@117 -- # sync 00:07:25.829 00:41:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:25.829 00:41:18 -- nvmf/common.sh@120 -- # set +e 00:07:25.829 00:41:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:25.829 00:41:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:25.829 rmmod nvme_tcp 00:07:25.829 rmmod nvme_fabrics 00:07:25.829 rmmod nvme_keyring 00:07:25.829 00:41:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:25.829 00:41:18 -- nvmf/common.sh@124 -- # set -e 
00:07:25.829 00:41:18 -- nvmf/common.sh@125 -- # return 0 00:07:25.829 00:41:18 -- nvmf/common.sh@478 -- # '[' -n 1549975 ']' 00:07:25.829 00:41:18 -- nvmf/common.sh@479 -- # killprocess 1549975 00:07:25.829 00:41:18 -- common/autotest_common.sh@936 -- # '[' -z 1549975 ']' 00:07:25.829 00:41:18 -- common/autotest_common.sh@940 -- # kill -0 1549975 00:07:25.829 00:41:18 -- common/autotest_common.sh@941 -- # uname 00:07:25.829 00:41:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:25.829 00:41:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1549975 00:07:25.829 00:41:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:25.829 00:41:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:25.829 00:41:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1549975' 00:07:25.829 killing process with pid 1549975 00:07:25.829 00:41:18 -- common/autotest_common.sh@955 -- # kill 1549975 00:07:25.829 00:41:18 -- common/autotest_common.sh@960 -- # wait 1549975 00:07:26.120 00:41:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:26.120 00:41:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:26.120 00:41:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:26.120 00:41:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:26.120 00:41:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:26.120 00:41:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.120 00:41:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.120 00:41:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.030 00:41:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:28.030 00:07:28.030 real 0m10.601s 00:07:28.030 user 0m12.592s 00:07:28.030 sys 0m4.964s 00:07:28.030 00:41:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:28.030 00:41:20 -- common/autotest_common.sh@10 -- # set +x 00:07:28.030 ************************************ 00:07:28.030 END TEST nvmf_referrals 00:07:28.030 ************************************ 00:07:28.030 00:41:20 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:28.030 00:41:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:28.030 00:41:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.030 00:41:20 -- common/autotest_common.sh@10 -- # set +x 00:07:28.288 ************************************ 00:07:28.288 START TEST nvmf_connect_disconnect 00:07:28.288 ************************************ 00:07:28.288 00:41:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:28.288 * Looking for test storage... 
00:07:28.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.288 00:41:20 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.288 00:41:20 -- nvmf/common.sh@7 -- # uname -s 00:07:28.288 00:41:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.288 00:41:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.288 00:41:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.288 00:41:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.288 00:41:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.288 00:41:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.288 00:41:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.288 00:41:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.288 00:41:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.288 00:41:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.288 00:41:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:28.288 00:41:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:28.288 00:41:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.288 00:41:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.288 00:41:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.288 00:41:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.288 00:41:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.288 00:41:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.288 00:41:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.288 00:41:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.288 00:41:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.288 00:41:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.288 00:41:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.288 00:41:20 -- paths/export.sh@5 -- # export PATH 00:07:28.289 00:41:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.289 00:41:20 -- nvmf/common.sh@47 -- # : 0 00:07:28.289 00:41:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:28.289 00:41:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:28.289 00:41:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.289 00:41:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.289 00:41:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.289 00:41:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:28.289 00:41:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:28.289 00:41:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:28.289 00:41:20 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:28.289 00:41:20 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:28.289 00:41:20 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:28.289 00:41:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:28.289 00:41:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.289 00:41:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:28.289 00:41:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:28.289 00:41:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:28.289 00:41:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.289 00:41:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.289 00:41:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.289 00:41:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:28.289 00:41:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:28.289 00:41:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:28.289 00:41:20 -- common/autotest_common.sh@10 -- # set +x 00:07:33.557 00:41:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:33.557 00:41:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:33.557 00:41:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:33.557 00:41:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:33.557 00:41:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:33.557 00:41:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:33.557 00:41:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:33.557 00:41:26 -- nvmf/common.sh@295 -- # net_devs=() 00:07:33.557 00:41:26 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:07:33.557 00:41:26 -- nvmf/common.sh@296 -- # e810=() 00:07:33.557 00:41:26 -- nvmf/common.sh@296 -- # local -ga e810 00:07:33.557 00:41:26 -- nvmf/common.sh@297 -- # x722=() 00:07:33.557 00:41:26 -- nvmf/common.sh@297 -- # local -ga x722 00:07:33.557 00:41:26 -- nvmf/common.sh@298 -- # mlx=() 00:07:33.557 00:41:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:33.557 00:41:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.557 00:41:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.557 00:41:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.557 00:41:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.557 00:41:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.557 00:41:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.557 00:41:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.557 00:41:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.557 00:41:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.557 00:41:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.557 00:41:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.557 00:41:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:33.557 00:41:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:33.557 00:41:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:33.557 00:41:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.557 00:41:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:33.557 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:33.557 00:41:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.557 00:41:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:33.557 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:33.557 00:41:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:33.557 00:41:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.557 00:41:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.557 00:41:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:33.557 00:41:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.557 00:41:26 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:07:33.557 Found net devices under 0000:86:00.0: cvl_0_0 00:07:33.557 00:41:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.557 00:41:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.557 00:41:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.557 00:41:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:33.557 00:41:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.557 00:41:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:33.557 Found net devices under 0000:86:00.1: cvl_0_1 00:07:33.557 00:41:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.557 00:41:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:33.557 00:41:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:33.557 00:41:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:33.557 00:41:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:33.557 00:41:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.557 00:41:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.557 00:41:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.557 00:41:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:33.557 00:41:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.557 00:41:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.557 00:41:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:33.557 00:41:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.557 00:41:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.557 00:41:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:33.557 00:41:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:33.557 00:41:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:33.557 00:41:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.815 00:41:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.816 00:41:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.816 00:41:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:33.816 00:41:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.816 00:41:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.816 00:41:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.816 00:41:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:33.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:07:33.816 00:07:33.816 --- 10.0.0.2 ping statistics --- 00:07:33.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.816 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:07:33.816 00:41:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:33.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:33.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:07:33.816 00:07:33.816 --- 10.0.0.1 ping statistics --- 00:07:33.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.816 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:07:33.816 00:41:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.816 00:41:26 -- nvmf/common.sh@411 -- # return 0 00:07:33.816 00:41:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:33.816 00:41:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.816 00:41:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:33.816 00:41:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:33.816 00:41:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.816 00:41:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:33.816 00:41:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:33.816 00:41:26 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:33.816 00:41:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:33.816 00:41:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:33.816 00:41:26 -- common/autotest_common.sh@10 -- # set +x 00:07:33.816 00:41:26 -- nvmf/common.sh@470 -- # nvmfpid=1553866 00:07:33.816 00:41:26 -- nvmf/common.sh@471 -- # waitforlisten 1553866 00:07:33.816 00:41:26 -- common/autotest_common.sh@817 -- # '[' -z 1553866 ']' 00:07:33.816 00:41:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.816 00:41:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:33.816 00:41:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.816 00:41:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:33.816 00:41:26 -- common/autotest_common.sh@10 -- # set +x 00:07:33.816 00:41:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.074 [2024-04-27 00:41:26.544235] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:07:34.074 [2024-04-27 00:41:26.544278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.074 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.074 [2024-04-27 00:41:26.601728] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.074 [2024-04-27 00:41:26.675355] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.074 [2024-04-27 00:41:26.675394] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.074 [2024-04-27 00:41:26.675402] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.074 [2024-04-27 00:41:26.675407] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.074 [2024-04-27 00:41:26.675413] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:34.074 [2024-04-27 00:41:26.675472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.074 [2024-04-27 00:41:26.675489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.074 [2024-04-27 00:41:26.675577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.074 [2024-04-27 00:41:26.675578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.007 00:41:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:35.007 00:41:27 -- common/autotest_common.sh@850 -- # return 0 00:07:35.007 00:41:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:35.007 00:41:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:35.007 00:41:27 -- common/autotest_common.sh@10 -- # set +x 00:07:35.007 00:41:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.007 00:41:27 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:35.007 00:41:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.007 00:41:27 -- common/autotest_common.sh@10 -- # set +x 00:07:35.007 [2024-04-27 00:41:27.387022] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.007 00:41:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.007 00:41:27 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:35.007 00:41:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.007 00:41:27 -- common/autotest_common.sh@10 -- # set +x 00:07:35.007 00:41:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.007 00:41:27 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:35.007 00:41:27 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:35.007 00:41:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.007 00:41:27 -- common/autotest_common.sh@10 -- # set +x 00:07:35.007 00:41:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.007 00:41:27 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:35.007 00:41:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.007 00:41:27 -- common/autotest_common.sh@10 -- # set +x 00:07:35.007 00:41:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.007 00:41:27 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.007 00:41:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.007 00:41:27 -- common/autotest_common.sh@10 -- # set +x 00:07:35.007 [2024-04-27 00:41:27.438691] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.007 00:41:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.007 00:41:27 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:35.007 00:41:27 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:35.007 00:41:27 -- target/connect_disconnect.sh@34 -- # set +x 00:07:38.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.395 00:41:43 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:51.395 00:41:43 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:51.395 00:41:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:51.396 00:41:43 -- nvmf/common.sh@117 -- # sync 00:07:51.396 00:41:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:51.396 00:41:43 -- nvmf/common.sh@120 -- # set +e 00:07:51.396 00:41:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:51.396 00:41:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:51.396 rmmod nvme_tcp 00:07:51.396 rmmod nvme_fabrics 00:07:51.396 rmmod nvme_keyring 00:07:51.396 00:41:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:51.396 00:41:43 -- nvmf/common.sh@124 -- # set -e 00:07:51.396 00:41:43 -- nvmf/common.sh@125 -- # return 0 00:07:51.396 00:41:43 -- nvmf/common.sh@478 -- # '[' -n 1553866 ']' 00:07:51.396 00:41:43 -- nvmf/common.sh@479 -- # killprocess 1553866 00:07:51.396 00:41:43 -- common/autotest_common.sh@936 -- # '[' -z 1553866 ']' 00:07:51.396 00:41:43 -- common/autotest_common.sh@940 -- # kill -0 1553866 00:07:51.396 00:41:43 -- common/autotest_common.sh@941 -- # uname 00:07:51.396 00:41:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:51.396 00:41:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1553866 00:07:51.396 00:41:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:51.396 00:41:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:51.396 00:41:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1553866' 00:07:51.396 killing process with pid 1553866 00:07:51.396 00:41:43 -- common/autotest_common.sh@955 -- # kill 1553866 00:07:51.396 00:41:43 -- common/autotest_common.sh@960 -- # wait 1553866 00:07:51.655 00:41:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:51.655 00:41:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:51.655 00:41:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:51.655 00:41:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.655 00:41:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:51.655 00:41:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.655 00:41:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.655 00:41:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.560 00:41:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:53.560 00:07:53.560 real 0m25.429s 00:07:53.560 user 1m11.091s 00:07:53.560 sys 0m5.356s 00:07:53.560 00:41:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:53.560 00:41:46 -- common/autotest_common.sh@10 -- # set +x 00:07:53.560 ************************************ 00:07:53.560 END TEST nvmf_connect_disconnect 00:07:53.560 ************************************ 00:07:53.818 00:41:46 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:53.818 00:41:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:53.818 00:41:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.818 00:41:46 -- common/autotest_common.sh@10 -- # set +x 00:07:53.818 ************************************ 00:07:53.818 START TEST nvmf_multitarget 00:07:53.818 ************************************ 00:07:53.818 00:41:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:07:53.818 * Looking for test storage... 00:07:54.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.078 00:41:46 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.078 00:41:46 -- nvmf/common.sh@7 -- # uname -s 00:07:54.078 00:41:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.078 00:41:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.078 00:41:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.078 00:41:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.078 00:41:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.078 00:41:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.078 00:41:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.078 00:41:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.078 00:41:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.078 00:41:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.078 00:41:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:54.078 00:41:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:54.078 00:41:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.078 00:41:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.078 00:41:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.078 00:41:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.078 00:41:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.078 00:41:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.078 00:41:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.078 00:41:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.078 00:41:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.078 00:41:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.078 00:41:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.078 00:41:46 -- paths/export.sh@5 -- # export PATH 00:07:54.078 00:41:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.078 00:41:46 -- nvmf/common.sh@47 -- # : 0 00:07:54.078 00:41:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.078 00:41:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.078 00:41:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.078 00:41:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.078 00:41:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.078 00:41:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.078 00:41:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.078 00:41:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.078 00:41:46 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:54.078 00:41:46 -- target/multitarget.sh@15 -- # nvmftestinit 00:07:54.078 00:41:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:54.078 00:41:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.078 00:41:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:54.078 00:41:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:54.078 00:41:46 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:54.078 00:41:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.078 00:41:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.078 00:41:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.078 00:41:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:54.078 00:41:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:54.078 00:41:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:54.078 00:41:46 -- common/autotest_common.sh@10 -- # set +x 00:07:59.348 00:41:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:59.348 00:41:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:59.348 00:41:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:59.348 00:41:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:59.348 00:41:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:59.348 00:41:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:59.348 00:41:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:59.348 00:41:51 -- nvmf/common.sh@295 -- # net_devs=() 00:07:59.348 00:41:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:59.348 00:41:51 -- 
nvmf/common.sh@296 -- # e810=() 00:07:59.348 00:41:51 -- nvmf/common.sh@296 -- # local -ga e810 00:07:59.348 00:41:51 -- nvmf/common.sh@297 -- # x722=() 00:07:59.348 00:41:51 -- nvmf/common.sh@297 -- # local -ga x722 00:07:59.348 00:41:51 -- nvmf/common.sh@298 -- # mlx=() 00:07:59.348 00:41:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:59.348 00:41:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.348 00:41:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.348 00:41:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.348 00:41:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.348 00:41:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.348 00:41:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.348 00:41:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.348 00:41:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.348 00:41:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.348 00:41:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.348 00:41:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.348 00:41:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:59.348 00:41:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:59.348 00:41:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:59.348 00:41:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.348 00:41:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:59.348 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:59.348 00:41:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.348 00:41:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:59.348 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:59.348 00:41:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:59.348 00:41:51 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.348 00:41:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.348 00:41:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:59.348 00:41:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.348 00:41:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:07:59.348 Found net devices under 0000:86:00.0: cvl_0_0 00:07:59.348 00:41:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.348 00:41:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.348 00:41:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.348 00:41:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:59.348 00:41:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.348 00:41:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:59.348 Found net devices under 0000:86:00.1: cvl_0_1 00:07:59.348 00:41:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.348 00:41:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:59.348 00:41:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:59.348 00:41:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:59.348 00:41:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:59.348 00:41:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.348 00:41:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.348 00:41:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.348 00:41:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:59.348 00:41:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.348 00:41:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.348 00:41:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:59.348 00:41:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.348 00:41:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.348 00:41:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:59.348 00:41:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:59.348 00:41:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.348 00:41:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.607 00:41:52 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.607 00:41:52 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.607 00:41:52 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:59.607 00:41:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.607 00:41:52 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.607 00:41:52 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.607 00:41:52 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:59.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:07:59.607 00:07:59.607 --- 10.0.0.2 ping statistics --- 00:07:59.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.607 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:07:59.607 00:41:52 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.464 ms 00:07:59.607 00:07:59.607 --- 10.0.0.1 ping statistics --- 00:07:59.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.607 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:07:59.607 00:41:52 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.607 00:41:52 -- nvmf/common.sh@411 -- # return 0 00:07:59.607 00:41:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:59.607 00:41:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.607 00:41:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:59.607 00:41:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:59.607 00:41:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.607 00:41:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:59.607 00:41:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:59.607 00:41:52 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:59.607 00:41:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:59.607 00:41:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:59.607 00:41:52 -- common/autotest_common.sh@10 -- # set +x 00:07:59.607 00:41:52 -- nvmf/common.sh@470 -- # nvmfpid=1560479 00:07:59.607 00:41:52 -- nvmf/common.sh@471 -- # waitforlisten 1560479 00:07:59.607 00:41:52 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:59.607 00:41:52 -- common/autotest_common.sh@817 -- # '[' -z 1560479 ']' 00:07:59.607 00:41:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.607 00:41:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:59.607 00:41:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.607 00:41:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:59.607 00:41:52 -- common/autotest_common.sh@10 -- # set +x 00:07:59.607 [2024-04-27 00:41:52.273450] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:07:59.607 [2024-04-27 00:41:52.273493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.607 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.864 [2024-04-27 00:41:52.330408] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.864 [2024-04-27 00:41:52.411306] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.864 [2024-04-27 00:41:52.411342] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.864 [2024-04-27 00:41:52.411349] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.864 [2024-04-27 00:41:52.411356] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.864 [2024-04-27 00:41:52.411361] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:59.864 [2024-04-27 00:41:52.411404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.864 [2024-04-27 00:41:52.411500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.864 [2024-04-27 00:41:52.411587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.864 [2024-04-27 00:41:52.411588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.464 00:41:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:00.464 00:41:53 -- common/autotest_common.sh@850 -- # return 0 00:08:00.464 00:41:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:00.464 00:41:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:00.464 00:41:53 -- common/autotest_common.sh@10 -- # set +x 00:08:00.464 00:41:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.464 00:41:53 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:00.464 00:41:53 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:00.464 00:41:53 -- target/multitarget.sh@21 -- # jq length 00:08:00.720 00:41:53 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:00.720 00:41:53 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:00.720 "nvmf_tgt_1" 00:08:00.720 00:41:53 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:00.720 "nvmf_tgt_2" 00:08:00.977 00:41:53 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:00.977 00:41:53 -- target/multitarget.sh@28 -- # jq length 00:08:00.977 00:41:53 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:00.977 00:41:53 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:00.977 true 00:08:00.977 00:41:53 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:01.233 true 00:08:01.233 00:41:53 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:01.233 00:41:53 -- target/multitarget.sh@35 -- # jq length 00:08:01.233 00:41:53 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:01.233 00:41:53 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:01.233 00:41:53 -- target/multitarget.sh@41 -- # nvmftestfini 00:08:01.233 00:41:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:01.233 00:41:53 -- nvmf/common.sh@117 -- # sync 00:08:01.233 00:41:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:01.233 00:41:53 -- nvmf/common.sh@120 -- # set +e 00:08:01.233 00:41:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:01.233 00:41:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:01.233 rmmod nvme_tcp 00:08:01.233 rmmod nvme_fabrics 00:08:01.233 rmmod nvme_keyring 00:08:01.233 00:41:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:01.233 00:41:53 -- nvmf/common.sh@124 -- # set -e 00:08:01.233 00:41:53 -- nvmf/common.sh@125 -- # return 0 
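The multitarget test above drives test/nvmf/target/multitarget_rpc.py against the running target: it confirms only the default target exists, creates two extra targets, re-counts, deletes them, and counts again. Reduced to its essentials (a sketch assuming the script path from the trace and jq on PATH):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32         # prints "nvmf_tgt_1"
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32         # prints "nvmf_tgt_2"
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new targets
$rpc nvmf_delete_target -n nvmf_tgt_1               # prints true
$rpc nvmf_delete_target -n nvmf_tgt_2               # prints true
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to the default only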
00:08:01.233 00:41:53 -- nvmf/common.sh@478 -- # '[' -n 1560479 ']' 00:08:01.233 00:41:53 -- nvmf/common.sh@479 -- # killprocess 1560479 00:08:01.233 00:41:53 -- common/autotest_common.sh@936 -- # '[' -z 1560479 ']' 00:08:01.233 00:41:53 -- common/autotest_common.sh@940 -- # kill -0 1560479 00:08:01.233 00:41:53 -- common/autotest_common.sh@941 -- # uname 00:08:01.233 00:41:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:01.233 00:41:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1560479 00:08:01.491 00:41:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:01.491 00:41:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:01.491 00:41:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1560479' 00:08:01.491 killing process with pid 1560479 00:08:01.491 00:41:53 -- common/autotest_common.sh@955 -- # kill 1560479 00:08:01.491 00:41:53 -- common/autotest_common.sh@960 -- # wait 1560479 00:08:01.491 00:41:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:01.491 00:41:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:01.491 00:41:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:01.491 00:41:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:01.491 00:41:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:01.491 00:41:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.491 00:41:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.491 00:41:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.018 00:41:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:04.018 00:08:04.018 real 0m9.775s 00:08:04.018 user 0m9.124s 00:08:04.018 sys 0m4.736s 00:08:04.018 00:41:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:04.018 00:41:56 -- common/autotest_common.sh@10 -- # set +x 00:08:04.018 ************************************ 00:08:04.018 END TEST nvmf_multitarget 00:08:04.018 ************************************ 00:08:04.018 00:41:56 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:04.018 00:41:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:04.018 00:41:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.018 00:41:56 -- common/autotest_common.sh@10 -- # set +x 00:08:04.018 ************************************ 00:08:04.018 START TEST nvmf_rpc 00:08:04.018 ************************************ 00:08:04.018 00:41:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:04.018 * Looking for test storage... 
00:08:04.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.018 00:41:56 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.018 00:41:56 -- nvmf/common.sh@7 -- # uname -s 00:08:04.018 00:41:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.018 00:41:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.018 00:41:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.018 00:41:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.018 00:41:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.018 00:41:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.018 00:41:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.018 00:41:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.018 00:41:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.018 00:41:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.018 00:41:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:04.018 00:41:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:04.018 00:41:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.018 00:41:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.018 00:41:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.018 00:41:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.018 00:41:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.018 00:41:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.018 00:41:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.018 00:41:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.018 00:41:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.018 00:41:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.018 00:41:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.018 00:41:56 -- paths/export.sh@5 -- # export PATH 00:08:04.018 00:41:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.018 00:41:56 -- nvmf/common.sh@47 -- # : 0 00:08:04.018 00:41:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:04.018 00:41:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:04.018 00:41:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.018 00:41:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.018 00:41:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.018 00:41:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:04.018 00:41:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:04.018 00:41:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:04.018 00:41:56 -- target/rpc.sh@11 -- # loops=5 00:08:04.018 00:41:56 -- target/rpc.sh@23 -- # nvmftestinit 00:08:04.018 00:41:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:04.018 00:41:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.018 00:41:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:04.018 00:41:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:04.018 00:41:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:04.018 00:41:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.018 00:41:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.018 00:41:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.018 00:41:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:04.018 00:41:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:04.018 00:41:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:04.018 00:41:56 -- common/autotest_common.sh@10 -- # set +x 00:08:09.288 00:42:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:09.288 00:42:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:09.288 00:42:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:09.288 00:42:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:09.288 00:42:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:09.288 00:42:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:09.288 00:42:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:09.288 00:42:01 -- nvmf/common.sh@295 -- # net_devs=() 00:08:09.288 00:42:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:09.288 00:42:01 -- nvmf/common.sh@296 -- # e810=() 00:08:09.288 00:42:01 -- nvmf/common.sh@296 -- # local -ga e810 00:08:09.288 
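Those array declarations feed the NIC classification that follows: the two ice ports at 0000:86:00.0/.1 (8086:159b) are matched into the e810 list and their netdev names collected into net_devs. A condensed, hypothetical equivalent using lspci directly (nvmf/common.sh actually walks a cached pci_bus_cache and also covers the x722 and Mellanox device IDs):

# hypothetical condensed e810-only version of gather_supported_nvmf_pci_devs
mapfile -t pci_devs < <(lspci -Dnmm | awk '$3 == "\"8086\"" && $4 == "\"159b\"" {print $1}')
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev dirs for this port
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done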
00:42:01 -- nvmf/common.sh@297 -- # x722=() 00:08:09.288 00:42:01 -- nvmf/common.sh@297 -- # local -ga x722 00:08:09.288 00:42:01 -- nvmf/common.sh@298 -- # mlx=() 00:08:09.288 00:42:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:09.288 00:42:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.288 00:42:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.288 00:42:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.288 00:42:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.288 00:42:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.288 00:42:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.288 00:42:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.288 00:42:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.288 00:42:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.288 00:42:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.288 00:42:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.288 00:42:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:09.288 00:42:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:09.288 00:42:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:09.288 00:42:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.288 00:42:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:09.288 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:09.288 00:42:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.288 00:42:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:09.288 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:09.288 00:42:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:09.288 00:42:01 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:09.288 00:42:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.288 00:42:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:09.288 00:42:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.288 00:42:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:09.288 Found net devices under 0000:86:00.0: cvl_0_0 00:08:09.288 00:42:01 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:08:09.288 00:42:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:09.288 00:42:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.288 00:42:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:09.288 00:42:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.288 00:42:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:09.288 Found net devices under 0000:86:00.1: cvl_0_1 00:08:09.288 00:42:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.288 00:42:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:09.288 00:42:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:09.288 00:42:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:09.288 00:42:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:09.288 00:42:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.288 00:42:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.288 00:42:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.288 00:42:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:09.288 00:42:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.288 00:42:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.288 00:42:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:09.288 00:42:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.288 00:42:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.288 00:42:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:09.288 00:42:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:09.288 00:42:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.288 00:42:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.288 00:42:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.288 00:42:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.288 00:42:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:09.288 00:42:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.288 00:42:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.288 00:42:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.546 00:42:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:09.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:08:09.546 00:08:09.546 --- 10.0.0.2 ping statistics --- 00:08:09.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.546 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:08:09.546 00:42:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms 00:08:09.546 00:08:09.546 --- 10.0.0.1 ping statistics --- 00:08:09.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.546 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:08:09.546 00:42:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.546 00:42:02 -- nvmf/common.sh@411 -- # return 0 00:08:09.546 00:42:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:09.546 00:42:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.546 00:42:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:09.546 00:42:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:09.546 00:42:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.546 00:42:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:09.546 00:42:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:09.546 00:42:02 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:09.546 00:42:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:09.546 00:42:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:09.546 00:42:02 -- common/autotest_common.sh@10 -- # set +x 00:08:09.546 00:42:02 -- nvmf/common.sh@470 -- # nvmfpid=1564399 00:08:09.546 00:42:02 -- nvmf/common.sh@471 -- # waitforlisten 1564399 00:08:09.546 00:42:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:09.546 00:42:02 -- common/autotest_common.sh@817 -- # '[' -z 1564399 ']' 00:08:09.546 00:42:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.546 00:42:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:09.546 00:42:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.546 00:42:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:09.546 00:42:02 -- common/autotest_common.sh@10 -- # set +x 00:08:09.546 [2024-04-27 00:42:02.091949] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:08:09.546 [2024-04-27 00:42:02.091992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.546 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.546 [2024-04-27 00:42:02.145249] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:09.546 [2024-04-27 00:42:02.224806] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.546 [2024-04-27 00:42:02.224842] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.546 [2024-04-27 00:42:02.224848] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.546 [2024-04-27 00:42:02.224854] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.546 [2024-04-27 00:42:02.224859] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
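With the second nvmf_tgt up, rpc.sh first validates the empty target through nvmf_get_stats: four poll groups (one per core in the 0xF mask), no transports, and zero qpairs, then creates the TCP transport and re-checks. The jcount/jsum helpers in the following trace are thin jq wrappers; a sketch of the same checks against the standard scripts/rpc.py client:

stats=$(./scripts/rpc.py nvmf_get_stats)
[ "$(jq '.poll_groups[].name' <<< "$stats" | wc -l)" -eq 4 ]          # one poll group per core
[ "$(jq '.poll_groups[0].transports[0]' <<< "$stats")" = null ]       # no transport yet
[ "$(jq '[.poll_groups[].admin_qpairs] | add' <<< "$stats")" -eq 0 ]  # no admin qpairs
[ "$(jq '[.poll_groups[].io_qpairs] | add' <<< "$stats")" -eq 0 ]     # no I/O qpairs
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192              # as in target/rpc.sh@31
[ "$(./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype')" = '"TCP"' ]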
00:08:09.546 [2024-04-27 00:42:02.224899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.546 [2024-04-27 00:42:02.224994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.546 [2024-04-27 00:42:02.225089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.546 [2024-04-27 00:42:02.225091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.553 00:42:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:10.553 00:42:02 -- common/autotest_common.sh@850 -- # return 0 00:08:10.553 00:42:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:10.553 00:42:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:10.553 00:42:02 -- common/autotest_common.sh@10 -- # set +x 00:08:10.553 00:42:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.553 00:42:02 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:10.553 00:42:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.553 00:42:02 -- common/autotest_common.sh@10 -- # set +x 00:08:10.553 00:42:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.553 00:42:02 -- target/rpc.sh@26 -- # stats='{ 00:08:10.553 "tick_rate": 2300000000, 00:08:10.553 "poll_groups": [ 00:08:10.553 { 00:08:10.553 "name": "nvmf_tgt_poll_group_0", 00:08:10.553 "admin_qpairs": 0, 00:08:10.553 "io_qpairs": 0, 00:08:10.553 "current_admin_qpairs": 0, 00:08:10.553 "current_io_qpairs": 0, 00:08:10.553 "pending_bdev_io": 0, 00:08:10.553 "completed_nvme_io": 0, 00:08:10.553 "transports": [] 00:08:10.553 }, 00:08:10.553 { 00:08:10.553 "name": "nvmf_tgt_poll_group_1", 00:08:10.553 "admin_qpairs": 0, 00:08:10.553 "io_qpairs": 0, 00:08:10.553 "current_admin_qpairs": 0, 00:08:10.553 "current_io_qpairs": 0, 00:08:10.553 "pending_bdev_io": 0, 00:08:10.553 "completed_nvme_io": 0, 00:08:10.553 "transports": [] 00:08:10.553 }, 00:08:10.553 { 00:08:10.553 "name": "nvmf_tgt_poll_group_2", 00:08:10.553 "admin_qpairs": 0, 00:08:10.553 "io_qpairs": 0, 00:08:10.553 "current_admin_qpairs": 0, 00:08:10.553 "current_io_qpairs": 0, 00:08:10.553 "pending_bdev_io": 0, 00:08:10.553 "completed_nvme_io": 0, 00:08:10.553 "transports": [] 00:08:10.553 }, 00:08:10.553 { 00:08:10.553 "name": "nvmf_tgt_poll_group_3", 00:08:10.553 "admin_qpairs": 0, 00:08:10.553 "io_qpairs": 0, 00:08:10.553 "current_admin_qpairs": 0, 00:08:10.553 "current_io_qpairs": 0, 00:08:10.553 "pending_bdev_io": 0, 00:08:10.553 "completed_nvme_io": 0, 00:08:10.553 "transports": [] 00:08:10.553 } 00:08:10.553 ] 00:08:10.553 }' 00:08:10.553 00:42:02 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:10.553 00:42:02 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:10.553 00:42:02 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:10.553 00:42:02 -- target/rpc.sh@15 -- # wc -l 00:08:10.553 00:42:03 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:10.553 00:42:03 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:10.553 00:42:03 -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:10.553 00:42:03 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.553 00:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.553 00:42:03 -- common/autotest_common.sh@10 -- # set +x 00:08:10.553 [2024-04-27 00:42:03.056279] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.553 00:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.553 00:42:03 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:10.553 00:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.553 00:42:03 -- common/autotest_common.sh@10 -- # set +x 00:08:10.553 00:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.553 00:42:03 -- target/rpc.sh@33 -- # stats='{ 00:08:10.553 "tick_rate": 2300000000, 00:08:10.553 "poll_groups": [ 00:08:10.553 { 00:08:10.553 "name": "nvmf_tgt_poll_group_0", 00:08:10.553 "admin_qpairs": 0, 00:08:10.553 "io_qpairs": 0, 00:08:10.553 "current_admin_qpairs": 0, 00:08:10.553 "current_io_qpairs": 0, 00:08:10.553 "pending_bdev_io": 0, 00:08:10.553 "completed_nvme_io": 0, 00:08:10.553 "transports": [ 00:08:10.553 { 00:08:10.553 "trtype": "TCP" 00:08:10.553 } 00:08:10.553 ] 00:08:10.553 }, 00:08:10.553 { 00:08:10.553 "name": "nvmf_tgt_poll_group_1", 00:08:10.553 "admin_qpairs": 0, 00:08:10.553 "io_qpairs": 0, 00:08:10.553 "current_admin_qpairs": 0, 00:08:10.553 "current_io_qpairs": 0, 00:08:10.553 "pending_bdev_io": 0, 00:08:10.553 "completed_nvme_io": 0, 00:08:10.553 "transports": [ 00:08:10.553 { 00:08:10.553 "trtype": "TCP" 00:08:10.553 } 00:08:10.553 ] 00:08:10.553 }, 00:08:10.553 { 00:08:10.553 "name": "nvmf_tgt_poll_group_2", 00:08:10.553 "admin_qpairs": 0, 00:08:10.553 "io_qpairs": 0, 00:08:10.553 "current_admin_qpairs": 0, 00:08:10.553 "current_io_qpairs": 0, 00:08:10.553 "pending_bdev_io": 0, 00:08:10.553 "completed_nvme_io": 0, 00:08:10.553 "transports": [ 00:08:10.553 { 00:08:10.553 "trtype": "TCP" 00:08:10.553 } 00:08:10.553 ] 00:08:10.553 }, 00:08:10.553 { 00:08:10.553 "name": "nvmf_tgt_poll_group_3", 00:08:10.553 "admin_qpairs": 0, 00:08:10.553 "io_qpairs": 0, 00:08:10.553 "current_admin_qpairs": 0, 00:08:10.553 "current_io_qpairs": 0, 00:08:10.553 "pending_bdev_io": 0, 00:08:10.553 "completed_nvme_io": 0, 00:08:10.553 "transports": [ 00:08:10.553 { 00:08:10.553 "trtype": "TCP" 00:08:10.553 } 00:08:10.553 ] 00:08:10.553 } 00:08:10.553 ] 00:08:10.553 }' 00:08:10.553 00:42:03 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:10.553 00:42:03 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:10.553 00:42:03 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:10.553 00:42:03 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:10.553 00:42:03 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:10.553 00:42:03 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:10.553 00:42:03 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:10.554 00:42:03 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:10.554 00:42:03 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:10.554 00:42:03 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:10.554 00:42:03 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:10.554 00:42:03 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:10.554 00:42:03 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:10.554 00:42:03 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:10.554 00:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.554 00:42:03 -- common/autotest_common.sh@10 -- # set +x 00:08:10.554 Malloc1 00:08:10.554 00:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.554 00:42:03 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:10.554 00:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.554 00:42:03 -- common/autotest_common.sh@10 -- # set +x 00:08:10.554 
00:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.554 00:42:03 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:10.554 00:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.554 00:42:03 -- common/autotest_common.sh@10 -- # set +x 00:08:10.554 00:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.554 00:42:03 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:10.554 00:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.554 00:42:03 -- common/autotest_common.sh@10 -- # set +x 00:08:10.554 00:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.554 00:42:03 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:10.554 00:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.554 00:42:03 -- common/autotest_common.sh@10 -- # set +x 00:08:10.554 [2024-04-27 00:42:03.224258] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.554 00:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.554 00:42:03 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:10.554 00:42:03 -- common/autotest_common.sh@638 -- # local es=0 00:08:10.554 00:42:03 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:10.554 00:42:03 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:10.554 00:42:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:10.554 00:42:03 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:10.554 00:42:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:10.554 00:42:03 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:10.554 00:42:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:10.554 00:42:03 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:10.554 00:42:03 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:10.554 00:42:03 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:10.813 [2024-04-27 00:42:03.253156] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:10.813 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:10.813 could not add new controller: failed to write to nvme-fabrics device 00:08:10.813 00:42:03 -- common/autotest_common.sh@641 -- # es=1 00:08:10.813 00:42:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:10.813 00:42:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:10.813 00:42:03 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:08:10.813 00:42:03 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:10.813 00:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.813 00:42:03 -- common/autotest_common.sh@10 -- # set +x 00:08:10.813 00:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.813 00:42:03 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:12.189 00:42:04 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:12.189 00:42:04 -- common/autotest_common.sh@1184 -- # local i=0 00:08:12.189 00:42:04 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:12.189 00:42:04 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:12.189 00:42:04 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:14.093 00:42:06 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:14.093 00:42:06 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:14.093 00:42:06 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.093 00:42:06 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:14.093 00:42:06 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.093 00:42:06 -- common/autotest_common.sh@1194 -- # return 0 00:08:14.093 00:42:06 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:14.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.093 00:42:06 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:14.093 00:42:06 -- common/autotest_common.sh@1205 -- # local i=0 00:08:14.093 00:42:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:14.093 00:42:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:14.093 00:42:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:14.093 00:42:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:14.093 00:42:06 -- common/autotest_common.sh@1217 -- # return 0 00:08:14.093 00:42:06 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:14.093 00:42:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.093 00:42:06 -- common/autotest_common.sh@10 -- # set +x 00:08:14.093 00:42:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.093 00:42:06 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:14.093 00:42:06 -- common/autotest_common.sh@638 -- # local es=0 00:08:14.093 00:42:06 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:14.093 00:42:06 -- common/autotest_common.sh@626 -- # local arg=nvme 00:08:14.093 00:42:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:14.093 00:42:06 -- common/autotest_common.sh@630 -- # type -t nvme 00:08:14.093 00:42:06 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:14.093 00:42:06 -- common/autotest_common.sh@632 -- # type -P nvme 00:08:14.093 00:42:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:14.093 00:42:06 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:08:14.093 00:42:06 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:08:14.093 00:42:06 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:14.093 [2024-04-27 00:42:06.620536] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:14.093 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:14.093 could not add new controller: failed to write to nvme-fabrics device 00:08:14.093 00:42:06 -- common/autotest_common.sh@641 -- # es=1 00:08:14.093 00:42:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:14.093 00:42:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:14.093 00:42:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:14.093 00:42:06 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:14.093 00:42:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:14.093 00:42:06 -- common/autotest_common.sh@10 -- # set +x 00:08:14.093 00:42:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:14.093 00:42:06 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:15.470 00:42:07 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:15.470 00:42:07 -- common/autotest_common.sh@1184 -- # local i=0 00:08:15.470 00:42:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:15.470 00:42:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:15.470 00:42:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:17.373 00:42:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:17.373 00:42:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:17.373 00:42:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:17.373 00:42:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:17.373 00:42:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:17.373 00:42:09 -- common/autotest_common.sh@1194 -- # return 0 00:08:17.373 00:42:09 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:17.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.373 00:42:09 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:17.373 00:42:09 -- common/autotest_common.sh@1205 -- # local i=0 00:08:17.373 00:42:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:17.373 00:42:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.373 00:42:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:17.373 00:42:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.373 00:42:09 -- common/autotest_common.sh@1217 -- # return 0 00:08:17.373 00:42:09 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.373 00:42:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.373 00:42:09 -- common/autotest_common.sh@10 -- # set +x 00:08:17.373 00:42:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.373 00:42:09 -- target/rpc.sh@81 -- # seq 1 5 00:08:17.373 00:42:09 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:17.373 00:42:09 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:17.373 00:42:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.373 00:42:09 -- common/autotest_common.sh@10 -- # set +x 00:08:17.373 00:42:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.373 00:42:09 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.373 00:42:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.373 00:42:09 -- common/autotest_common.sh@10 -- # set +x 00:08:17.373 [2024-04-27 00:42:09.931153] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.373 00:42:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.373 00:42:09 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:17.373 00:42:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.373 00:42:09 -- common/autotest_common.sh@10 -- # set +x 00:08:17.373 00:42:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.373 00:42:09 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:17.373 00:42:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.373 00:42:09 -- common/autotest_common.sh@10 -- # set +x 00:08:17.373 00:42:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.373 00:42:09 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:18.749 00:42:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:18.749 00:42:11 -- common/autotest_common.sh@1184 -- # local i=0 00:08:18.749 00:42:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:18.749 00:42:11 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:18.749 00:42:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:20.649 00:42:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:20.649 00:42:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:20.649 00:42:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:20.649 00:42:13 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:20.649 00:42:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:20.649 00:42:13 -- common/autotest_common.sh@1194 -- # return 0 00:08:20.649 00:42:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:20.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.649 00:42:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:20.649 00:42:13 -- common/autotest_common.sh@1205 -- # local i=0 00:08:20.649 00:42:13 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:20.649 00:42:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
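A few steps back, the pair of rejected connects ("does not allow host ... could not add new controller") exercised the subsystem host ACL: with allow_any_host disabled and an empty host list, the kernel initiator is turned away until its host NQN is added (or allow_any_host is re-enabled). Stripped of the rpc_cmd/NOT wrappers, that sequence is roughly the following sketch, assuming scripts/rpc.py; the hostnqn/hostid values are the ones generated earlier in the trace:

NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
./scripts/rpc.py nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1
./scripts/rpc.py nvmf_subsystem_allow_any_host -d "$NQN"
./scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
# expected to fail: host not on the subsystem's allowed list
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID" && exit 1
# accepted once the host NQN is whitelisted (re-enabling allow_any_host works the same way)
./scripts/rpc.py nvmf_subsystem_add_host "$NQN" "$HOSTNQN"
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" --hostid="$HOSTID"
nvme disconnect -n "$NQN"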
00:08:20.649 00:42:13 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:20.649 00:42:13 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:20.649 00:42:13 -- common/autotest_common.sh@1217 -- # return 0 00:08:20.649 00:42:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.649 00:42:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.649 00:42:13 -- common/autotest_common.sh@10 -- # set +x 00:08:20.649 00:42:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.649 00:42:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:20.649 00:42:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.649 00:42:13 -- common/autotest_common.sh@10 -- # set +x 00:08:20.649 00:42:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.649 00:42:13 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:20.649 00:42:13 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:20.649 00:42:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.649 00:42:13 -- common/autotest_common.sh@10 -- # set +x 00:08:20.649 00:42:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.649 00:42:13 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.649 00:42:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.649 00:42:13 -- common/autotest_common.sh@10 -- # set +x 00:08:20.649 [2024-04-27 00:42:13.278555] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.649 00:42:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.649 00:42:13 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:20.649 00:42:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.649 00:42:13 -- common/autotest_common.sh@10 -- # set +x 00:08:20.649 00:42:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.649 00:42:13 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:20.649 00:42:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:20.649 00:42:13 -- common/autotest_common.sh@10 -- # set +x 00:08:20.649 00:42:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:20.649 00:42:13 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:22.023 00:42:14 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:22.023 00:42:14 -- common/autotest_common.sh@1184 -- # local i=0 00:08:22.023 00:42:14 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:22.023 00:42:14 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:22.023 00:42:14 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:23.922 00:42:16 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:23.922 00:42:16 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:23.922 00:42:16 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:23.922 00:42:16 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:23.922 00:42:16 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:23.922 00:42:16 -- 
common/autotest_common.sh@1194 -- # return 0 00:08:23.922 00:42:16 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:23.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.180 00:42:16 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:24.180 00:42:16 -- common/autotest_common.sh@1205 -- # local i=0 00:08:24.180 00:42:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:24.180 00:42:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.180 00:42:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:24.180 00:42:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:24.180 00:42:16 -- common/autotest_common.sh@1217 -- # return 0 00:08:24.180 00:42:16 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:24.180 00:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.180 00:42:16 -- common/autotest_common.sh@10 -- # set +x 00:08:24.180 00:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.180 00:42:16 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.180 00:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.180 00:42:16 -- common/autotest_common.sh@10 -- # set +x 00:08:24.180 00:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.180 00:42:16 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:24.180 00:42:16 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:24.180 00:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.180 00:42:16 -- common/autotest_common.sh@10 -- # set +x 00:08:24.180 00:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.180 00:42:16 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.180 00:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.180 00:42:16 -- common/autotest_common.sh@10 -- # set +x 00:08:24.180 [2024-04-27 00:42:16.679588] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.180 00:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.180 00:42:16 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:24.180 00:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.180 00:42:16 -- common/autotest_common.sh@10 -- # set +x 00:08:24.180 00:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.180 00:42:16 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:24.180 00:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.180 00:42:16 -- common/autotest_common.sh@10 -- # set +x 00:08:24.180 00:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:24.180 00:42:16 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:25.555 00:42:17 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:25.556 00:42:17 -- common/autotest_common.sh@1184 -- # local i=0 00:08:25.556 00:42:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:25.556 00:42:17 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:08:25.556 00:42:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:27.457 00:42:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:27.457 00:42:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:27.457 00:42:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:27.457 00:42:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:27.457 00:42:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:27.457 00:42:19 -- common/autotest_common.sh@1194 -- # return 0 00:08:27.457 00:42:19 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:27.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.457 00:42:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:27.457 00:42:20 -- common/autotest_common.sh@1205 -- # local i=0 00:08:27.457 00:42:20 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:27.457 00:42:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.457 00:42:20 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:27.457 00:42:20 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.457 00:42:20 -- common/autotest_common.sh@1217 -- # return 0 00:08:27.457 00:42:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.457 00:42:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.457 00:42:20 -- common/autotest_common.sh@10 -- # set +x 00:08:27.457 00:42:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.457 00:42:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.457 00:42:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.458 00:42:20 -- common/autotest_common.sh@10 -- # set +x 00:08:27.458 00:42:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.458 00:42:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:27.458 00:42:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:27.458 00:42:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.458 00:42:20 -- common/autotest_common.sh@10 -- # set +x 00:08:27.458 00:42:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.458 00:42:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.458 00:42:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.458 00:42:20 -- common/autotest_common.sh@10 -- # set +x 00:08:27.458 [2024-04-27 00:42:20.075971] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.458 00:42:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.458 00:42:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:27.458 00:42:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.458 00:42:20 -- common/autotest_common.sh@10 -- # set +x 00:08:27.458 00:42:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.458 00:42:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:27.458 00:42:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:27.458 00:42:20 -- common/autotest_common.sh@10 -- # set +x 00:08:27.458 00:42:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:27.458 
00:42:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:28.832 00:42:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:28.832 00:42:21 -- common/autotest_common.sh@1184 -- # local i=0 00:08:28.832 00:42:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:28.832 00:42:21 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:28.832 00:42:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:30.735 00:42:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:30.735 00:42:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:30.735 00:42:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:30.736 00:42:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:30.736 00:42:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:30.736 00:42:23 -- common/autotest_common.sh@1194 -- # return 0 00:08:30.736 00:42:23 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:30.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.736 00:42:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:30.736 00:42:23 -- common/autotest_common.sh@1205 -- # local i=0 00:08:30.736 00:42:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:30.736 00:42:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.736 00:42:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:30.736 00:42:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.736 00:42:23 -- common/autotest_common.sh@1217 -- # return 0 00:08:30.736 00:42:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.736 00:42:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.736 00:42:23 -- common/autotest_common.sh@10 -- # set +x 00:08:30.736 00:42:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.736 00:42:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.736 00:42:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.736 00:42:23 -- common/autotest_common.sh@10 -- # set +x 00:08:30.736 00:42:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.736 00:42:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:30.736 00:42:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.736 00:42:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.736 00:42:23 -- common/autotest_common.sh@10 -- # set +x 00:08:30.736 00:42:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.736 00:42:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.736 00:42:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.736 00:42:23 -- common/autotest_common.sh@10 -- # set +x 00:08:30.736 [2024-04-27 00:42:23.389532] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.736 00:42:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.736 00:42:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:30.736 
00:42:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.736 00:42:23 -- common/autotest_common.sh@10 -- # set +x 00:08:30.736 00:42:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.736 00:42:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.736 00:42:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:30.736 00:42:23 -- common/autotest_common.sh@10 -- # set +x 00:08:30.736 00:42:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:30.736 00:42:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:32.108 00:42:24 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.108 00:42:24 -- common/autotest_common.sh@1184 -- # local i=0 00:08:32.108 00:42:24 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:08:32.108 00:42:24 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:08:32.108 00:42:24 -- common/autotest_common.sh@1191 -- # sleep 2 00:08:34.006 00:42:26 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:08:34.006 00:42:26 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:08:34.006 00:42:26 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.006 00:42:26 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:08:34.006 00:42:26 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.006 00:42:26 -- common/autotest_common.sh@1194 -- # return 0 00:08:34.006 00:42:26 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:34.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.264 00:42:26 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:34.264 00:42:26 -- common/autotest_common.sh@1205 -- # local i=0 00:08:34.264 00:42:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:08:34.264 00:42:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.264 00:42:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:08:34.264 00:42:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:34.264 00:42:26 -- common/autotest_common.sh@1217 -- # return 0 00:08:34.264 00:42:26 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:34.264 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.264 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.264 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.264 00:42:26 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.264 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.264 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.264 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.264 00:42:26 -- target/rpc.sh@99 -- # seq 1 5 00:08:34.264 00:42:26 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:34.264 00:42:26 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:34.264 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.264 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.264 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.264 00:42:26 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.264 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.264 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.264 [2024-04-27 00:42:26.797058] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.264 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.264 00:42:26 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:34.264 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.264 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.264 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.264 00:42:26 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:34.264 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.264 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.264 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.264 00:42:26 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.264 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.264 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.264 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.264 00:42:26 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.264 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.264 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.264 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.264 00:42:26 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:34.264 00:42:26 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:34.264 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.264 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.264 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.264 00:42:26 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.264 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.264 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.264 [2024-04-27 00:42:26.845174] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.264 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.264 00:42:26 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.265 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.265 00:42:26 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.265 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.265 00:42:26 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.265 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.265 00:42:26 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.265 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.265 00:42:26 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:34.265 00:42:26 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.265 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.265 00:42:26 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.265 [2024-04-27 00:42:26.893313] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.265 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.265 00:42:26 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.265 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.265 00:42:26 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.265 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.265 00:42:26 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.265 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.265 00:42:26 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.265 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.265 00:42:26 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:34.265 00:42:26 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.265 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.265 00:42:26 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.265 [2024-04-27 00:42:26.945497] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.265 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.265 
00:42:26 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.265 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.265 00:42:26 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:34.265 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.265 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.524 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.524 00:42:26 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.524 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.524 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.524 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.524 00:42:26 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.524 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.524 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.524 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.524 00:42:26 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:34.524 00:42:26 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:34.524 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.524 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.524 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.524 00:42:26 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:34.524 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.524 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.524 [2024-04-27 00:42:26.993647] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.524 00:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.524 00:42:26 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:34.524 00:42:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.524 00:42:26 -- common/autotest_common.sh@10 -- # set +x 00:08:34.524 00:42:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.524 00:42:27 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:34.524 00:42:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.524 00:42:27 -- common/autotest_common.sh@10 -- # set +x 00:08:34.524 00:42:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.524 00:42:27 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.524 00:42:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.524 00:42:27 -- common/autotest_common.sh@10 -- # set +x 00:08:34.524 00:42:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.524 00:42:27 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:34.524 00:42:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.524 00:42:27 -- common/autotest_common.sh@10 -- # set +x 00:08:34.524 00:42:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.524 00:42:27 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
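Each pass of the loop above drives the same JSON-RPC sequence against the running target. A condensed sketch of that loop, assuming rpc.py is invoked from the SPDK checkout used in this run (the real script wraps every call in rpc_cmd with xtrace toggling, omitted here):

  rpc=scripts/rpc.py   # path relative to the SPDK tree; adjust as needed
  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done
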
00:08:34.524 00:42:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:34.524 00:42:27 -- common/autotest_common.sh@10 -- # set +x 00:08:34.524 00:42:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:34.524 00:42:27 -- target/rpc.sh@110 -- # stats='{ 00:08:34.524 "tick_rate": 2300000000, 00:08:34.524 "poll_groups": [ 00:08:34.524 { 00:08:34.524 "name": "nvmf_tgt_poll_group_0", 00:08:34.524 "admin_qpairs": 2, 00:08:34.524 "io_qpairs": 168, 00:08:34.524 "current_admin_qpairs": 0, 00:08:34.524 "current_io_qpairs": 0, 00:08:34.524 "pending_bdev_io": 0, 00:08:34.524 "completed_nvme_io": 317, 00:08:34.524 "transports": [ 00:08:34.524 { 00:08:34.524 "trtype": "TCP" 00:08:34.524 } 00:08:34.524 ] 00:08:34.524 }, 00:08:34.524 { 00:08:34.524 "name": "nvmf_tgt_poll_group_1", 00:08:34.524 "admin_qpairs": 2, 00:08:34.524 "io_qpairs": 168, 00:08:34.524 "current_admin_qpairs": 0, 00:08:34.524 "current_io_qpairs": 0, 00:08:34.524 "pending_bdev_io": 0, 00:08:34.524 "completed_nvme_io": 267, 00:08:34.524 "transports": [ 00:08:34.524 { 00:08:34.524 "trtype": "TCP" 00:08:34.524 } 00:08:34.524 ] 00:08:34.524 }, 00:08:34.524 { 00:08:34.524 "name": "nvmf_tgt_poll_group_2", 00:08:34.524 "admin_qpairs": 1, 00:08:34.524 "io_qpairs": 168, 00:08:34.524 "current_admin_qpairs": 0, 00:08:34.524 "current_io_qpairs": 0, 00:08:34.524 "pending_bdev_io": 0, 00:08:34.524 "completed_nvme_io": 170, 00:08:34.524 "transports": [ 00:08:34.524 { 00:08:34.524 "trtype": "TCP" 00:08:34.524 } 00:08:34.524 ] 00:08:34.524 }, 00:08:34.524 { 00:08:34.524 "name": "nvmf_tgt_poll_group_3", 00:08:34.524 "admin_qpairs": 2, 00:08:34.524 "io_qpairs": 168, 00:08:34.524 "current_admin_qpairs": 0, 00:08:34.524 "current_io_qpairs": 0, 00:08:34.524 "pending_bdev_io": 0, 00:08:34.524 "completed_nvme_io": 268, 00:08:34.524 "transports": [ 00:08:34.524 { 00:08:34.524 "trtype": "TCP" 00:08:34.524 } 00:08:34.524 ] 00:08:34.524 } 00:08:34.524 ] 00:08:34.524 }' 00:08:34.524 00:42:27 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:34.524 00:42:27 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:34.524 00:42:27 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:34.524 00:42:27 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:34.524 00:42:27 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:34.524 00:42:27 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:34.524 00:42:27 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:34.524 00:42:27 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:34.524 00:42:27 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:34.524 00:42:27 -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:34.524 00:42:27 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:34.524 00:42:27 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:34.524 00:42:27 -- target/rpc.sh@123 -- # nvmftestfini 00:08:34.524 00:42:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:34.524 00:42:27 -- nvmf/common.sh@117 -- # sync 00:08:34.524 00:42:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:34.524 00:42:27 -- nvmf/common.sh@120 -- # set +e 00:08:34.524 00:42:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:34.524 00:42:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:34.524 rmmod nvme_tcp 00:08:34.524 rmmod nvme_fabrics 00:08:34.524 rmmod nvme_keyring 00:08:34.524 00:42:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.524 00:42:27 -- nvmf/common.sh@124 -- # set -e 00:08:34.524 00:42:27 -- 
nvmf/common.sh@125 -- # return 0 00:08:34.524 00:42:27 -- nvmf/common.sh@478 -- # '[' -n 1564399 ']' 00:08:34.524 00:42:27 -- nvmf/common.sh@479 -- # killprocess 1564399 00:08:34.524 00:42:27 -- common/autotest_common.sh@936 -- # '[' -z 1564399 ']' 00:08:34.524 00:42:27 -- common/autotest_common.sh@940 -- # kill -0 1564399 00:08:34.524 00:42:27 -- common/autotest_common.sh@941 -- # uname 00:08:34.783 00:42:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:34.783 00:42:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1564399 00:08:34.783 00:42:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:34.783 00:42:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:34.783 00:42:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1564399' 00:08:34.784 killing process with pid 1564399 00:08:34.784 00:42:27 -- common/autotest_common.sh@955 -- # kill 1564399 00:08:34.784 00:42:27 -- common/autotest_common.sh@960 -- # wait 1564399 00:08:35.042 00:42:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:35.042 00:42:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:35.042 00:42:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:35.042 00:42:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.042 00:42:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.042 00:42:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.042 00:42:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.042 00:42:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.948 00:42:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:36.948 00:08:36.948 real 0m33.185s 00:08:36.948 user 1m42.247s 00:08:36.948 sys 0m5.981s 00:08:36.948 00:42:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:36.948 00:42:29 -- common/autotest_common.sh@10 -- # set +x 00:08:36.948 ************************************ 00:08:36.948 END TEST nvmf_rpc 00:08:36.948 ************************************ 00:08:36.948 00:42:29 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:36.948 00:42:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:36.948 00:42:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.948 00:42:29 -- common/autotest_common.sh@10 -- # set +x 00:08:37.207 ************************************ 00:08:37.207 START TEST nvmf_invalid 00:08:37.207 ************************************ 00:08:37.207 00:42:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:37.207 * Looking for test storage... 
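Before the nvmf_rpc run above finishes, the jsum checks reduce nvmf_get_stats to two totals: jq extracts one field from every poll group and awk sums the values (7 admin qpairs and 672 I/O qpairs across the four poll groups in this run). nvmftestfini then unloads the kernel NVMe/TCP modules and kills the nvmf_tgt it started (pid 1564399 here). A condensed sketch of both steps, assuming the rpc.py path used earlier and $nvmfpid holding the recorded target pid (the real helpers retry the module removal and also clean up the network namespace):

  stats=$(scripts/rpc.py nvmf_get_stats)
  admin=$(jq '.poll_groups[].admin_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}')
  io=$(jq '.poll_groups[].io_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}')
  (( admin > 0 && io > 0 ))        # the test only asserts both sums are non-zero

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  if kill -0 "$nvmfpid" 2>/dev/null; then
      echo "killing process with pid $nvmfpid"
      kill "$nvmfpid" && wait "$nvmfpid"
  fi
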
00:08:37.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.207 00:42:29 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.207 00:42:29 -- nvmf/common.sh@7 -- # uname -s 00:08:37.207 00:42:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.207 00:42:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.207 00:42:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.207 00:42:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.207 00:42:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.207 00:42:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.207 00:42:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.207 00:42:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.207 00:42:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.207 00:42:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.207 00:42:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:37.207 00:42:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:37.207 00:42:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.207 00:42:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.207 00:42:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.207 00:42:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.207 00:42:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.207 00:42:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.207 00:42:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.207 00:42:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.208 00:42:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.208 00:42:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.208 00:42:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.208 00:42:29 -- paths/export.sh@5 -- # export PATH 00:08:37.208 00:42:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.208 00:42:29 -- nvmf/common.sh@47 -- # : 0 00:08:37.208 00:42:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:37.208 00:42:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:37.208 00:42:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.208 00:42:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.208 00:42:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.208 00:42:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:37.208 00:42:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:37.208 00:42:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:37.208 00:42:29 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:37.208 00:42:29 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.208 00:42:29 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:37.208 00:42:29 -- target/invalid.sh@14 -- # target=foobar 00:08:37.208 00:42:29 -- target/invalid.sh@16 -- # RANDOM=0 00:08:37.208 00:42:29 -- target/invalid.sh@34 -- # nvmftestinit 00:08:37.208 00:42:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:37.208 00:42:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.208 00:42:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:37.208 00:42:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:37.208 00:42:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:37.208 00:42:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.208 00:42:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.208 00:42:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.208 00:42:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:37.208 00:42:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:37.208 00:42:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:37.208 00:42:29 -- common/autotest_common.sh@10 -- # set +x 00:08:42.477 00:42:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:42.477 00:42:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:42.477 00:42:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:42.477 00:42:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:42.477 00:42:34 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:42.477 00:42:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:42.477 00:42:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:42.477 00:42:34 -- nvmf/common.sh@295 -- # net_devs=() 00:08:42.477 00:42:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:42.477 00:42:34 -- nvmf/common.sh@296 -- # e810=() 00:08:42.477 00:42:34 -- nvmf/common.sh@296 -- # local -ga e810 00:08:42.477 00:42:34 -- nvmf/common.sh@297 -- # x722=() 00:08:42.477 00:42:34 -- nvmf/common.sh@297 -- # local -ga x722 00:08:42.477 00:42:34 -- nvmf/common.sh@298 -- # mlx=() 00:08:42.477 00:42:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:42.477 00:42:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.477 00:42:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.477 00:42:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.477 00:42:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.477 00:42:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.477 00:42:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.477 00:42:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.477 00:42:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.477 00:42:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.477 00:42:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.477 00:42:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.477 00:42:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:42.477 00:42:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:42.477 00:42:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:42.477 00:42:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:42.477 00:42:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:42.477 00:42:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:42.477 00:42:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.477 00:42:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:42.477 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:42.477 00:42:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.477 00:42:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.477 00:42:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.477 00:42:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.477 00:42:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.477 00:42:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.477 00:42:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:42.477 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:42.477 00:42:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.478 00:42:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.478 00:42:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.478 00:42:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.478 00:42:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.478 00:42:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:42.478 00:42:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:42.478 00:42:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:42.478 00:42:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.478 
00:42:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.478 00:42:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:42.478 00:42:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.478 00:42:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:42.478 Found net devices under 0000:86:00.0: cvl_0_0 00:08:42.478 00:42:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.478 00:42:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.478 00:42:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.478 00:42:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:42.478 00:42:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.478 00:42:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:42.478 Found net devices under 0000:86:00.1: cvl_0_1 00:08:42.478 00:42:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.478 00:42:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:42.478 00:42:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:42.478 00:42:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:42.478 00:42:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:42.478 00:42:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:42.478 00:42:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.478 00:42:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.478 00:42:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.478 00:42:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:42.478 00:42:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.478 00:42:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.478 00:42:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:42.478 00:42:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.478 00:42:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.478 00:42:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:42.478 00:42:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:42.478 00:42:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.478 00:42:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.478 00:42:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.478 00:42:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.478 00:42:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:42.478 00:42:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.478 00:42:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.478 00:42:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.478 00:42:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:42.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:08:42.478 00:08:42.478 --- 10.0.0.2 ping statistics --- 00:08:42.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.478 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:08:42.478 00:42:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:08:42.478 00:08:42.478 --- 10.0.0.1 ping statistics --- 00:08:42.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.478 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:08:42.478 00:42:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.478 00:42:35 -- nvmf/common.sh@411 -- # return 0 00:08:42.478 00:42:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:42.478 00:42:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.478 00:42:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:42.478 00:42:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:42.478 00:42:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.478 00:42:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:42.478 00:42:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:42.737 00:42:35 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:42.737 00:42:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:42.737 00:42:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:42.737 00:42:35 -- common/autotest_common.sh@10 -- # set +x 00:08:42.737 00:42:35 -- nvmf/common.sh@470 -- # nvmfpid=1572631 00:08:42.737 00:42:35 -- nvmf/common.sh@471 -- # waitforlisten 1572631 00:08:42.737 00:42:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.737 00:42:35 -- common/autotest_common.sh@817 -- # '[' -z 1572631 ']' 00:08:42.737 00:42:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.737 00:42:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:42.737 00:42:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.737 00:42:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:42.737 00:42:35 -- common/autotest_common.sh@10 -- # set +x 00:08:42.737 [2024-04-27 00:42:35.248659] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:08:42.737 [2024-04-27 00:42:35.248703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.737 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.737 [2024-04-27 00:42:35.308017] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.737 [2024-04-27 00:42:35.384121] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.737 [2024-04-27 00:42:35.384164] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.737 [2024-04-27 00:42:35.384170] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.737 [2024-04-27 00:42:35.384176] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.737 [2024-04-27 00:42:35.384181] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
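The nvmf_tcp_init sequence above splits the two E810 ports between a fresh network namespace (target side, cvl_0_0 at 10.0.0.2) and the host (initiator side, cvl_0_1 at 10.0.0.1), opens TCP/4420, checks the path with ping in both directions, and only then launches nvmf_tgt inside the namespace. The essential commands as they appear in the log (interface names are specific to this machine; the nvmf_tgt path is shortened to be relative to the SPDK tree):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # the harness then waits for the RPC socket at /var/tmp/spdk.sock (waitforlisten)
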
00:08:42.737 [2024-04-27 00:42:35.384232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.737 [2024-04-27 00:42:35.384320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.737 [2024-04-27 00:42:35.384405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.737 [2024-04-27 00:42:35.384407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.673 00:42:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:43.673 00:42:36 -- common/autotest_common.sh@850 -- # return 0 00:08:43.673 00:42:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:43.673 00:42:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:43.673 00:42:36 -- common/autotest_common.sh@10 -- # set +x 00:08:43.673 00:42:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.673 00:42:36 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:43.673 00:42:36 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22205 00:08:43.673 [2024-04-27 00:42:36.241271] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:43.673 00:42:36 -- target/invalid.sh@40 -- # out='request: 00:08:43.673 { 00:08:43.673 "nqn": "nqn.2016-06.io.spdk:cnode22205", 00:08:43.673 "tgt_name": "foobar", 00:08:43.673 "method": "nvmf_create_subsystem", 00:08:43.673 "req_id": 1 00:08:43.673 } 00:08:43.673 Got JSON-RPC error response 00:08:43.673 response: 00:08:43.673 { 00:08:43.673 "code": -32603, 00:08:43.673 "message": "Unable to find target foobar" 00:08:43.673 }' 00:08:43.673 00:42:36 -- target/invalid.sh@41 -- # [[ request: 00:08:43.673 { 00:08:43.673 "nqn": "nqn.2016-06.io.spdk:cnode22205", 00:08:43.673 "tgt_name": "foobar", 00:08:43.673 "method": "nvmf_create_subsystem", 00:08:43.673 "req_id": 1 00:08:43.673 } 00:08:43.673 Got JSON-RPC error response 00:08:43.673 response: 00:08:43.673 { 00:08:43.673 "code": -32603, 00:08:43.673 "message": "Unable to find target foobar" 00:08:43.673 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:43.673 00:42:36 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:43.673 00:42:36 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1493 00:08:43.931 [2024-04-27 00:42:36.442009] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1493: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:43.931 00:42:36 -- target/invalid.sh@45 -- # out='request: 00:08:43.931 { 00:08:43.931 "nqn": "nqn.2016-06.io.spdk:cnode1493", 00:08:43.931 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:43.931 "method": "nvmf_create_subsystem", 00:08:43.931 "req_id": 1 00:08:43.931 } 00:08:43.931 Got JSON-RPC error response 00:08:43.931 response: 00:08:43.931 { 00:08:43.931 "code": -32602, 00:08:43.931 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:43.931 }' 00:08:43.931 00:42:36 -- target/invalid.sh@46 -- # [[ request: 00:08:43.931 { 00:08:43.931 "nqn": "nqn.2016-06.io.spdk:cnode1493", 00:08:43.931 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:43.931 "method": "nvmf_create_subsystem", 00:08:43.931 "req_id": 1 00:08:43.931 } 00:08:43.931 Got JSON-RPC error response 00:08:43.931 response: 00:08:43.931 { 
00:08:43.931 "code": -32602, 00:08:43.931 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:43.931 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:43.931 00:42:36 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:43.931 00:42:36 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26510 00:08:44.188 [2024-04-27 00:42:36.634590] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26510: invalid model number 'SPDK_Controller' 00:08:44.188 00:42:36 -- target/invalid.sh@50 -- # out='request: 00:08:44.188 { 00:08:44.188 "nqn": "nqn.2016-06.io.spdk:cnode26510", 00:08:44.188 "model_number": "SPDK_Controller\u001f", 00:08:44.188 "method": "nvmf_create_subsystem", 00:08:44.188 "req_id": 1 00:08:44.188 } 00:08:44.188 Got JSON-RPC error response 00:08:44.188 response: 00:08:44.188 { 00:08:44.188 "code": -32602, 00:08:44.188 "message": "Invalid MN SPDK_Controller\u001f" 00:08:44.188 }' 00:08:44.188 00:42:36 -- target/invalid.sh@51 -- # [[ request: 00:08:44.188 { 00:08:44.188 "nqn": "nqn.2016-06.io.spdk:cnode26510", 00:08:44.188 "model_number": "SPDK_Controller\u001f", 00:08:44.188 "method": "nvmf_create_subsystem", 00:08:44.188 "req_id": 1 00:08:44.188 } 00:08:44.189 Got JSON-RPC error response 00:08:44.189 response: 00:08:44.189 { 00:08:44.189 "code": -32602, 00:08:44.189 "message": "Invalid MN SPDK_Controller\u001f" 00:08:44.189 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:44.189 00:42:36 -- target/invalid.sh@54 -- # gen_random_s 21 00:08:44.189 00:42:36 -- target/invalid.sh@19 -- # local length=21 ll 00:08:44.189 00:42:36 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:44.189 00:42:36 -- target/invalid.sh@21 -- # local chars 00:08:44.189 00:42:36 -- target/invalid.sh@22 -- # local string 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 117 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=u 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 49 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=1 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 37 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=% 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 89 00:08:44.189 00:42:36 -- 
target/invalid.sh@25 -- # echo -e '\x59' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=Y 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 40 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+='(' 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 39 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x27' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=\' 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 72 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=H 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 91 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+='[' 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 109 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=m 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 68 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=D 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 75 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=K 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 106 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=j 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 124 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+='|' 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 33 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+='!' 
00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 43 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=+ 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 71 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x47' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=G 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 104 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=h 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 70 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=F 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 98 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=b 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 54 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=6 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # printf %x 113 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x71' 00:08:44.189 00:42:36 -- target/invalid.sh@25 -- # string+=q 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.189 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.189 00:42:36 -- target/invalid.sh@28 -- # [[ u == \- ]] 00:08:44.189 00:42:36 -- target/invalid.sh@31 -- # echo 'u1%Y('\''H[mDKj|!+GhFb6q' 00:08:44.189 00:42:36 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'u1%Y('\''H[mDKj|!+GhFb6q' nqn.2016-06.io.spdk:cnode27274 00:08:44.448 [2024-04-27 00:42:36.951633] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27274: invalid serial number 'u1%Y('H[mDKj|!+GhFb6q' 00:08:44.448 00:42:36 -- target/invalid.sh@54 -- # out='request: 00:08:44.448 { 00:08:44.448 "nqn": "nqn.2016-06.io.spdk:cnode27274", 00:08:44.448 "serial_number": "u1%Y('\''H[mDKj|!+GhFb6q", 00:08:44.448 "method": "nvmf_create_subsystem", 00:08:44.448 "req_id": 1 00:08:44.448 } 00:08:44.448 Got JSON-RPC error response 00:08:44.448 response: 00:08:44.448 { 00:08:44.448 "code": -32602, 00:08:44.448 "message": "Invalid SN u1%Y('\''H[mDKj|!+GhFb6q" 00:08:44.448 }' 00:08:44.448 00:42:36 -- target/invalid.sh@55 -- # [[ request: 00:08:44.448 { 00:08:44.448 "nqn": "nqn.2016-06.io.spdk:cnode27274", 00:08:44.448 "serial_number": 
"u1%Y('H[mDKj|!+GhFb6q", 00:08:44.448 "method": "nvmf_create_subsystem", 00:08:44.448 "req_id": 1 00:08:44.448 } 00:08:44.448 Got JSON-RPC error response 00:08:44.448 response: 00:08:44.448 { 00:08:44.448 "code": -32602, 00:08:44.448 "message": "Invalid SN u1%Y('H[mDKj|!+GhFb6q" 00:08:44.448 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:44.448 00:42:36 -- target/invalid.sh@58 -- # gen_random_s 41 00:08:44.448 00:42:36 -- target/invalid.sh@19 -- # local length=41 ll 00:08:44.448 00:42:36 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:44.448 00:42:36 -- target/invalid.sh@21 -- # local chars 00:08:44.448 00:42:36 -- target/invalid.sh@22 -- # local string 00:08:44.448 00:42:36 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:44.448 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.448 00:42:36 -- target/invalid.sh@25 -- # printf %x 46 00:08:44.448 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:44.448 00:42:36 -- target/invalid.sh@25 -- # string+=. 00:08:44.448 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.448 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.448 00:42:36 -- target/invalid.sh@25 -- # printf %x 35 00:08:44.448 00:42:36 -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:44.448 00:42:36 -- target/invalid.sh@25 -- # string+='#' 00:08:44.448 00:42:36 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.448 00:42:36 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # printf %x 126 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # string+='~' 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # printf %x 56 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x38' 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # string+=8 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # printf %x 66 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # string+=B 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # printf %x 122 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # string+=z 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # printf %x 113 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x71' 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # string+=q 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 
00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # printf %x 96 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # string+='`' 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # printf %x 93 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # string+=']' 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # printf %x 114 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x72' 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # string+=r 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # printf %x 66 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:44.448 00:42:37 -- target/invalid.sh@25 -- # string+=B 00:08:44.448 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 32 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+=' ' 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 37 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+=% 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 46 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+=. 
00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 84 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x54' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+=T 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 100 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+=d 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 58 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+=: 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 99 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+=c 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 117 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x75' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+=u 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 89 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+=Y 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 43 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+=+ 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 42 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+='*' 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 63 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+='?' 
00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 91 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+='[' 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # printf %x 40 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:44.449 00:42:37 -- target/invalid.sh@25 -- # string+='(' 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.449 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # printf %x 66 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # string+=B 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # printf %x 47 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # string+=/ 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # printf %x 73 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x49' 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # string+=I 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # printf %x 88 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x58' 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # string+=X 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # printf %x 89 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x59' 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # string+=Y 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # printf %x 34 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # string+='"' 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # printf %x 96 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # string+='`' 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # printf %x 125 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # string+='}' 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # printf %x 48 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # string+=0 
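The long per-character trace above (and continuing below) is gen_random_s building a 41-character string out of random printable ASCII codes, which the test then submits as a deliberately invalid serial/model number. A compact standalone sketch of the same technique follows; it is not the SPDK helper itself, and it restricts itself to codes 33-126 to sidestep shell handling of the space character:

    gen_random_s() {
        # Emit a string of $1 random printable characters (ASCII 33-126 here;
        # the traced helper also allows space and DEL, codes 32 and 127).
        local length=$1 ll string= code hex
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( 33 + RANDOM % 94 ))      # pick a printable code point
            hex=$(printf %x "$code")          # same printf-%x step as in the trace
            string+=$(echo -e "\x$hex")       # turn the code point into a character
        done
        printf '%s\n' "$string"
    }
    gen_random_s 41    # e.g. a 41-character string to use as an invalid model number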
00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # printf %x 44 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # string+=, 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # printf %x 58 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:08:44.707 00:42:37 -- target/invalid.sh@25 -- # string+=: 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.707 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # printf %x 57 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x39' 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # string+=9 00:08:44.708 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.708 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # printf %x 99 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # string+=c 00:08:44.708 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.708 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # printf %x 64 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # string+=@ 00:08:44.708 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.708 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # printf %x 60 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # string+='<' 00:08:44.708 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.708 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # printf %x 47 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:44.708 00:42:37 -- target/invalid.sh@25 -- # string+=/ 00:08:44.708 00:42:37 -- target/invalid.sh@24 -- # (( ll++ )) 00:08:44.708 00:42:37 -- target/invalid.sh@24 -- # (( ll < length )) 00:08:44.708 00:42:37 -- target/invalid.sh@28 -- # [[ . 
== \- ]] 00:08:44.708 00:42:37 -- target/invalid.sh@31 -- # echo '.#~8Bzq`]rB %.Td:cuY+*?[(B/IXY"`}0,:9c@ /dev/null' 00:08:46.820 00:42:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.723 00:42:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.723 00:08:48.723 real 0m11.696s 00:08:48.723 user 0m19.411s 00:08:48.723 sys 0m4.993s 00:08:48.723 00:42:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:48.723 00:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:48.723 ************************************ 00:08:48.723 END TEST nvmf_invalid 00:08:48.723 ************************************ 00:08:48.983 00:42:41 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:48.983 00:42:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:48.983 00:42:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.983 00:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:48.983 ************************************ 00:08:48.983 START TEST nvmf_abort 00:08:48.983 ************************************ 00:08:48.983 00:42:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:48.983 * Looking for test storage... 00:08:48.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.983 00:42:41 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.983 00:42:41 -- nvmf/common.sh@7 -- # uname -s 00:08:48.983 00:42:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.983 00:42:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.983 00:42:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.983 00:42:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.983 00:42:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.983 00:42:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.983 00:42:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.983 00:42:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.983 00:42:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.983 00:42:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.242 00:42:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:49.242 00:42:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:49.242 00:42:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.242 00:42:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.242 00:42:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.242 00:42:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.242 00:42:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.242 00:42:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.242 00:42:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.242 00:42:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.242 00:42:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.242 00:42:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.242 00:42:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.242 00:42:41 -- paths/export.sh@5 -- # export PATH 00:08:49.242 00:42:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.242 00:42:41 -- nvmf/common.sh@47 -- # : 0 00:08:49.242 00:42:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:49.242 00:42:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:49.242 00:42:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.242 00:42:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.242 00:42:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.242 00:42:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:49.242 00:42:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:49.242 00:42:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:49.242 00:42:41 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:49.242 00:42:41 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:49.242 00:42:41 -- target/abort.sh@14 -- # nvmftestinit 00:08:49.242 00:42:41 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:49.242 00:42:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.242 00:42:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:49.242 00:42:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:49.242 00:42:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:49.242 00:42:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:49.242 00:42:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.242 00:42:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.242 00:42:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:49.243 00:42:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:49.243 00:42:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:49.243 00:42:41 -- common/autotest_common.sh@10 -- # set +x 00:08:54.507 00:42:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:54.507 00:42:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:54.507 00:42:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:54.507 00:42:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:54.507 00:42:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:54.507 00:42:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:54.507 00:42:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:54.507 00:42:46 -- nvmf/common.sh@295 -- # net_devs=() 00:08:54.507 00:42:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:54.507 00:42:46 -- nvmf/common.sh@296 -- # e810=() 00:08:54.507 00:42:46 -- nvmf/common.sh@296 -- # local -ga e810 00:08:54.507 00:42:46 -- nvmf/common.sh@297 -- # x722=() 00:08:54.507 00:42:46 -- nvmf/common.sh@297 -- # local -ga x722 00:08:54.507 00:42:46 -- nvmf/common.sh@298 -- # mlx=() 00:08:54.507 00:42:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:54.507 00:42:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.507 00:42:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.507 00:42:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.507 00:42:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.507 00:42:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.507 00:42:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.507 00:42:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.507 00:42:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.507 00:42:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.507 00:42:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.507 00:42:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.507 00:42:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:54.507 00:42:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:54.507 00:42:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:54.507 00:42:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.507 00:42:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:54.507 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:54.507 00:42:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:54.507 00:42:46 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:54.507 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:54.507 00:42:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:54.507 00:42:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.507 00:42:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.507 00:42:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:54.507 00:42:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.507 00:42:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:54.507 Found net devices under 0000:86:00.0: cvl_0_0 00:08:54.507 00:42:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.507 00:42:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:54.507 00:42:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.507 00:42:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:54.507 00:42:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.507 00:42:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:54.507 Found net devices under 0000:86:00.1: cvl_0_1 00:08:54.507 00:42:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.507 00:42:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:54.507 00:42:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:54.507 00:42:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:54.507 00:42:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:54.507 00:42:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.507 00:42:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.507 00:42:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.507 00:42:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:54.507 00:42:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.507 00:42:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.507 00:42:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:54.507 00:42:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.507 00:42:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.508 00:42:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:54.508 00:42:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:54.508 00:42:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.508 00:42:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.508 00:42:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.508 00:42:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.508 00:42:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:54.508 00:42:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
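The device discovery traced above matches the supported PCI IDs (Intel E810 0x159b in this run) and resolves each function to its kernel net device through sysfs. Assuming lspci is available, the same lookup can be reproduced by hand, using the bus addresses and device names reported in this run:

    # Intel E810 functions: vendor 0x8086, device 0x159b, as found in the trace.
    lspci -d 8086:159b
    # Each function exposes its netdev name under its sysfs node:
    ls /sys/bus/pci/devices/0000:86:00.0/net/    # -> cvl_0_0 in this run
    ls /sys/bus/pci/devices/0000:86:00.1/net/    # -> cvl_0_1 in this run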
00:08:54.508 00:42:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.508 00:42:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.508 00:42:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:54.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:08:54.508 00:08:54.508 --- 10.0.0.2 ping statistics --- 00:08:54.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.508 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:08:54.508 00:42:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.464 ms 00:08:54.508 00:08:54.508 --- 10.0.0.1 ping statistics --- 00:08:54.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.508 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:08:54.508 00:42:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.508 00:42:47 -- nvmf/common.sh@411 -- # return 0 00:08:54.508 00:42:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:54.508 00:42:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.508 00:42:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:54.508 00:42:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:54.508 00:42:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.508 00:42:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:54.508 00:42:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:54.508 00:42:47 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:54.508 00:42:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:54.508 00:42:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:54.508 00:42:47 -- common/autotest_common.sh@10 -- # set +x 00:08:54.508 00:42:47 -- nvmf/common.sh@470 -- # nvmfpid=1576807 00:08:54.508 00:42:47 -- nvmf/common.sh@471 -- # waitforlisten 1576807 00:08:54.508 00:42:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:54.508 00:42:47 -- common/autotest_common.sh@817 -- # '[' -z 1576807 ']' 00:08:54.508 00:42:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.508 00:42:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:54.508 00:42:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.508 00:42:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:54.508 00:42:47 -- common/autotest_common.sh@10 -- # set +x 00:08:54.508 [2024-04-27 00:42:47.087925] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
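Condensed, the nvmf_tcp_init sequence traced in this and the previous line builds a single-host TCP topology: the target-side port is isolated in its own network namespace, the initiator port stays in the root namespace, TCP port 4420 is opened, and connectivity is verified in both directions. A sketch using the interface names and addresses from this run:

    # Target-side port goes into its own namespace; initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator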
00:08:54.508 [2024-04-27 00:42:47.087971] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.508 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.508 [2024-04-27 00:42:47.145159] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.765 [2024-04-27 00:42:47.225282] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.765 [2024-04-27 00:42:47.225314] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.765 [2024-04-27 00:42:47.225321] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.765 [2024-04-27 00:42:47.225328] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.765 [2024-04-27 00:42:47.225333] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.765 [2024-04-27 00:42:47.225370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.765 [2024-04-27 00:42:47.225454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.766 [2024-04-27 00:42:47.225456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.330 00:42:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:55.330 00:42:47 -- common/autotest_common.sh@850 -- # return 0 00:08:55.330 00:42:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:55.330 00:42:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:55.330 00:42:47 -- common/autotest_common.sh@10 -- # set +x 00:08:55.330 00:42:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.330 00:42:47 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:55.330 00:42:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.330 00:42:47 -- common/autotest_common.sh@10 -- # set +x 00:08:55.330 [2024-04-27 00:42:47.938118] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.330 00:42:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.330 00:42:47 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:55.330 00:42:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.331 00:42:47 -- common/autotest_common.sh@10 -- # set +x 00:08:55.331 Malloc0 00:08:55.331 00:42:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.331 00:42:47 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:55.331 00:42:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.331 00:42:47 -- common/autotest_common.sh@10 -- # set +x 00:08:55.331 Delay0 00:08:55.331 00:42:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.331 00:42:47 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:55.331 00:42:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.331 00:42:47 -- common/autotest_common.sh@10 -- # set +x 00:08:55.331 00:42:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.331 00:42:48 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:55.331 00:42:48 -- common/autotest_common.sh@549 -- # xtrace_disable 
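Once nvmf_tgt is listening on its RPC socket, the abort test configures the target through rpc_cmd: a TCP transport, a malloc bdev (size 64, block size 4096, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE) wrapped in a delay bdev, and subsystem nqn.2016-06.io.spdk:cnode0 with Delay0 as namespace 1. The same configuration expressed as plain scripts/rpc.py calls would look roughly like this (a sketch; the traced run drives these through its rpc_cmd helper):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0           # backing malloc bdev
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000     # inject large fixed I/O latency
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0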
00:08:55.331 00:42:48 -- common/autotest_common.sh@10 -- # set +x 00:08:55.331 00:42:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.331 00:42:48 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:55.331 00:42:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.331 00:42:48 -- common/autotest_common.sh@10 -- # set +x 00:08:55.331 [2024-04-27 00:42:48.014338] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.331 00:42:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.331 00:42:48 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:55.331 00:42:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:55.331 00:42:48 -- common/autotest_common.sh@10 -- # set +x 00:08:55.588 00:42:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:55.588 00:42:48 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:55.588 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.588 [2024-04-27 00:42:48.081622] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:58.116 Initializing NVMe Controllers 00:08:58.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:58.116 controller IO queue size 128 less than required 00:08:58.116 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:58.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:58.116 Initialization complete. Launching workers. 
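With the data and discovery listeners on 10.0.0.2:4420 in place, the test launches the bundled abort example straight from the build tree. As a side note, the same listener could also be sanity-checked from the initiator side with the kernel NVMe/TCP initiator; this is not part of the traced run and assumes nvme-cli is installed:

    # Optional manual check against the listener created above (not done in this run).
    modprobe nvme-tcp
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    nvme list                                            # Delay0 should appear as a namespace
    nvme disconnect -n nqn.2016-06.io.spdk:cnode0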
00:08:58.116 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42312 00:08:58.116 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42373, failed to submit 62 00:08:58.116 success 42316, unsuccess 57, failed 0 00:08:58.116 00:42:50 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:58.116 00:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:58.116 00:42:50 -- common/autotest_common.sh@10 -- # set +x 00:08:58.116 00:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:58.116 00:42:50 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:58.116 00:42:50 -- target/abort.sh@38 -- # nvmftestfini 00:08:58.116 00:42:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:58.116 00:42:50 -- nvmf/common.sh@117 -- # sync 00:08:58.116 00:42:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.116 00:42:50 -- nvmf/common.sh@120 -- # set +e 00:08:58.116 00:42:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.116 00:42:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.116 rmmod nvme_tcp 00:08:58.116 rmmod nvme_fabrics 00:08:58.116 rmmod nvme_keyring 00:08:58.116 00:42:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.116 00:42:50 -- nvmf/common.sh@124 -- # set -e 00:08:58.116 00:42:50 -- nvmf/common.sh@125 -- # return 0 00:08:58.116 00:42:50 -- nvmf/common.sh@478 -- # '[' -n 1576807 ']' 00:08:58.116 00:42:50 -- nvmf/common.sh@479 -- # killprocess 1576807 00:08:58.116 00:42:50 -- common/autotest_common.sh@936 -- # '[' -z 1576807 ']' 00:08:58.116 00:42:50 -- common/autotest_common.sh@940 -- # kill -0 1576807 00:08:58.116 00:42:50 -- common/autotest_common.sh@941 -- # uname 00:08:58.116 00:42:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:58.116 00:42:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1576807 00:08:58.117 00:42:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:08:58.117 00:42:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:08:58.117 00:42:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1576807' 00:08:58.117 killing process with pid 1576807 00:08:58.117 00:42:50 -- common/autotest_common.sh@955 -- # kill 1576807 00:08:58.117 00:42:50 -- common/autotest_common.sh@960 -- # wait 1576807 00:08:58.117 00:42:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:58.117 00:42:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:58.117 00:42:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:58.117 00:42:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.117 00:42:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.117 00:42:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.117 00:42:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.117 00:42:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.020 00:42:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.020 00:09:00.020 real 0m11.085s 00:09:00.020 user 0m13.194s 00:09:00.020 sys 0m5.051s 00:09:00.020 00:42:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:00.020 00:42:52 -- common/autotest_common.sh@10 -- # set +x 00:09:00.020 ************************************ 00:09:00.020 END TEST nvmf_abort 00:09:00.020 ************************************ 00:09:00.020 00:42:52 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:00.020 00:42:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:00.020 00:42:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.020 00:42:52 -- common/autotest_common.sh@10 -- # set +x 00:09:00.279 ************************************ 00:09:00.279 START TEST nvmf_ns_hotplug_stress 00:09:00.279 ************************************ 00:09:00.279 00:42:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:00.279 * Looking for test storage... 00:09:00.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.279 00:42:52 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.279 00:42:52 -- nvmf/common.sh@7 -- # uname -s 00:09:00.279 00:42:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.279 00:42:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.279 00:42:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.279 00:42:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.279 00:42:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.279 00:42:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.279 00:42:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.279 00:42:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.279 00:42:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.279 00:42:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.279 00:42:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:00.279 00:42:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:00.279 00:42:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.279 00:42:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.279 00:42:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.279 00:42:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.279 00:42:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.279 00:42:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.279 00:42:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.279 00:42:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.279 00:42:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.279 00:42:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.279 00:42:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.279 00:42:52 -- paths/export.sh@5 -- # export PATH 00:09:00.279 00:42:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.279 00:42:52 -- nvmf/common.sh@47 -- # : 0 00:09:00.279 00:42:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.279 00:42:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.279 00:42:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.279 00:42:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.279 00:42:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.279 00:42:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.279 00:42:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.279 00:42:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.279 00:42:52 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.279 00:42:52 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:09:00.279 00:42:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:00.279 00:42:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.279 00:42:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:00.280 00:42:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:00.280 00:42:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:00.280 00:42:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.280 00:42:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.280 00:42:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.280 00:42:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:00.280 00:42:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:00.280 00:42:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.280 00:42:52 -- common/autotest_common.sh@10 -- # set +x 00:09:06.842 00:42:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:09:06.842 00:42:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.842 00:42:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.842 00:42:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.842 00:42:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.842 00:42:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.842 00:42:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.842 00:42:58 -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.842 00:42:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.842 00:42:58 -- nvmf/common.sh@296 -- # e810=() 00:09:06.842 00:42:58 -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.842 00:42:58 -- nvmf/common.sh@297 -- # x722=() 00:09:06.842 00:42:58 -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.842 00:42:58 -- nvmf/common.sh@298 -- # mlx=() 00:09:06.842 00:42:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.842 00:42:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.842 00:42:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.842 00:42:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.842 00:42:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.842 00:42:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.842 00:42:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.842 00:42:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.842 00:42:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.842 00:42:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.842 00:42:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.842 00:42:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.842 00:42:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.842 00:42:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:06.842 00:42:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.842 00:42:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.842 00:42:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:06.842 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:06.842 00:42:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.842 00:42:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:06.842 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:06.842 00:42:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:09:06.842 00:42:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.842 00:42:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.842 00:42:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:06.842 00:42:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.842 00:42:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:06.842 Found net devices under 0000:86:00.0: cvl_0_0 00:09:06.842 00:42:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.842 00:42:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.842 00:42:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.842 00:42:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:06.842 00:42:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.842 00:42:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:06.842 Found net devices under 0000:86:00.1: cvl_0_1 00:09:06.842 00:42:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.842 00:42:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:06.842 00:42:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:06.842 00:42:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:06.842 00:42:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.842 00:42:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.842 00:42:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.842 00:42:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:06.842 00:42:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.842 00:42:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.842 00:42:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:06.842 00:42:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.842 00:42:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.842 00:42:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:06.842 00:42:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:06.842 00:42:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.842 00:42:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.842 00:42:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.842 00:42:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.842 00:42:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:06.842 00:42:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.842 00:42:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.842 00:42:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.842 00:42:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:06.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:06.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:09:06.842 00:09:06.842 --- 10.0.0.2 ping statistics --- 00:09:06.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.842 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:09:06.842 00:42:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:09:06.842 00:09:06.842 --- 10.0.0.1 ping statistics --- 00:09:06.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.842 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:09:06.842 00:42:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.842 00:42:58 -- nvmf/common.sh@411 -- # return 0 00:09:06.842 00:42:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:06.842 00:42:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.842 00:42:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:06.842 00:42:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.842 00:42:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:06.842 00:42:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:06.842 00:42:58 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:09:06.842 00:42:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:06.842 00:42:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:06.842 00:42:58 -- common/autotest_common.sh@10 -- # set +x 00:09:06.842 00:42:58 -- nvmf/common.sh@470 -- # nvmfpid=1581038 00:09:06.842 00:42:58 -- nvmf/common.sh@471 -- # waitforlisten 1581038 00:09:06.842 00:42:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:06.842 00:42:58 -- common/autotest_common.sh@817 -- # '[' -z 1581038 ']' 00:09:06.842 00:42:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.842 00:42:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:06.842 00:42:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.842 00:42:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:06.842 00:42:58 -- common/autotest_common.sh@10 -- # set +x 00:09:06.842 [2024-04-27 00:42:58.745713] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:09:06.842 [2024-04-27 00:42:58.745753] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.842 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.842 [2024-04-27 00:42:58.803421] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:06.842 [2024-04-27 00:42:58.873264] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.842 [2024-04-27 00:42:58.873305] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
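The app_setup_trace notices in this line mean every tracepoint group is enabled for this second nvmf_tgt instance (shm id 0). Following the hint printed in the log, a snapshot could be taken with the spdk_trace app while the target runs; the output redirection and copy destination below are illustrative:

    # Decode a snapshot of the enabled tracepoints for this instance:
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # Or keep the raw shared-memory trace file for offline analysis:
    cp /dev/shm/nvmf_trace.0 /tmp/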
00:09:06.842 [2024-04-27 00:42:58.873312] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.842 [2024-04-27 00:42:58.873318] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.842 [2024-04-27 00:42:58.873324] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.842 [2024-04-27 00:42:58.873362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.842 [2024-04-27 00:42:58.873448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.842 [2024-04-27 00:42:58.873449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.101 00:42:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:07.101 00:42:59 -- common/autotest_common.sh@850 -- # return 0 00:09:07.101 00:42:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:07.101 00:42:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:07.101 00:42:59 -- common/autotest_common.sh@10 -- # set +x 00:09:07.101 00:42:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.101 00:42:59 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:09:07.101 00:42:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:07.101 [2024-04-27 00:42:59.730906] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.101 00:42:59 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:07.359 00:42:59 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.618 [2024-04-27 00:43:00.112381] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.618 00:43:00 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:07.877 00:43:00 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:07.877 Malloc0 00:09:07.877 00:43:00 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:08.136 Delay0 00:09:08.136 00:43:00 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.394 00:43:00 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:08.394 NULL1 00:09:08.394 00:43:01 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:08.653 00:43:01 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:08.653 00:43:01 -- target/ns_hotplug_stress.sh@33 -- # 
PERF_PID=1581389 00:09:08.653 00:43:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:08.653 00:43:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.653 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.911 00:43:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.911 00:43:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:09:08.911 00:43:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:09.170 true 00:09:09.170 00:43:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:09.170 00:43:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.429 00:43:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.688 00:43:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:09:09.688 00:43:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:09.688 true 00:09:09.688 00:43:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:09.688 00:43:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:09.946 00:43:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.205 00:43:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:09:10.205 00:43:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:10.462 true 00:09:10.462 00:43:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:10.462 00:43:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.720 00:43:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.720 00:43:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:09:10.720 00:43:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:10.978 true 00:09:10.978 00:43:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:10.978 00:43:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.238 00:43:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.238 00:43:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:09:11.238 00:43:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:11.496 true 00:09:11.496 00:43:04 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:11.496 00:43:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.754 00:43:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.013 00:43:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:09:12.013 00:43:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:12.013 true 00:09:12.013 00:43:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:12.013 00:43:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.272 00:43:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.531 00:43:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:09:12.531 00:43:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:12.531 true 00:09:12.789 00:43:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:12.790 00:43:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.790 00:43:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.049 00:43:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:09:13.049 00:43:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:13.307 true 00:09:13.307 00:43:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:13.307 00:43:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.307 00:43:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.566 00:43:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:09:13.566 00:43:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:13.825 true 00:09:13.825 00:43:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:13.825 00:43:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.084 00:43:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.084 00:43:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:09:14.084 00:43:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:14.343 true 00:09:14.343 00:43:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:14.343 00:43:06 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.602 00:43:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.860 00:43:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:09:14.860 00:43:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:14.860 true 00:09:14.860 00:43:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:14.860 00:43:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.119 00:43:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.378 00:43:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:09:15.378 00:43:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:15.378 true 00:09:15.378 00:43:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:15.378 00:43:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.637 00:43:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.895 00:43:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:09:15.895 00:43:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:16.154 true 00:09:16.154 00:43:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:16.154 00:43:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.154 00:43:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.412 00:43:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:09:16.412 00:43:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:16.671 true 00:09:16.671 00:43:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:16.671 00:43:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.928 00:43:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.929 00:43:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:09:16.929 00:43:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:17.187 true 00:09:17.187 00:43:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:17.187 00:43:09 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:09:17.445 00:43:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.702 00:43:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:09:17.702 00:43:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:17.702 true 00:09:17.702 00:43:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:17.702 00:43:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.960 00:43:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.218 00:43:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:09:18.218 00:43:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:18.218 true 00:09:18.218 00:43:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:18.218 00:43:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.477 00:43:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.735 00:43:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:09:18.735 00:43:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:18.994 true 00:09:18.994 00:43:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:18.994 00:43:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.994 00:43:11 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.252 00:43:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:09:19.252 00:43:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:19.523 true 00:09:19.523 00:43:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:19.523 00:43:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.523 00:43:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.867 00:43:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:09:19.867 00:43:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:20.136 true 00:09:20.136 00:43:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:20.136 00:43:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.136 00:43:12 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.415 00:43:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:09:20.415 00:43:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:20.679 true 00:09:20.679 00:43:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:20.679 00:43:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.679 00:43:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.937 00:43:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:09:20.937 00:43:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:21.195 true 00:09:21.195 00:43:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:21.195 00:43:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.454 00:43:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.454 00:43:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:09:21.454 00:43:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:21.712 true 00:09:21.712 00:43:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:21.712 00:43:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.971 00:43:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.229 00:43:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:09:22.229 00:43:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:22.229 true 00:09:22.229 00:43:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:22.229 00:43:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.488 00:43:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.746 00:43:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:09:22.746 00:43:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:23.005 true 00:09:23.005 00:43:15 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:23.005 00:43:15 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.263 00:43:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:09:23.263 00:43:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:09:23.263 00:43:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:23.522 true 00:09:23.522 00:43:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:23.522 00:43:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.781 00:43:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.781 00:43:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:09:23.781 00:43:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:24.039 true 00:09:24.039 00:43:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:24.039 00:43:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.298 00:43:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.556 00:43:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:09:24.556 00:43:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:24.556 true 00:09:24.556 00:43:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:24.556 00:43:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.814 00:43:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.072 00:43:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:09:25.072 00:43:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:25.330 true 00:09:25.330 00:43:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:25.330 00:43:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.330 00:43:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.588 00:43:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:09:25.588 00:43:18 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:25.847 true 00:09:25.847 00:43:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:25.847 00:43:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.105 00:43:18 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.105 00:43:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:09:26.105 00:43:18 -- 
target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:26.364 true 00:09:26.364 00:43:18 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:26.364 00:43:18 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.622 00:43:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.881 00:43:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:09:26.881 00:43:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:09:26.881 true 00:09:26.881 00:43:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:26.881 00:43:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.139 00:43:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.398 00:43:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:09:27.398 00:43:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:09:27.656 true 00:09:27.656 00:43:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:27.656 00:43:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.656 00:43:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.914 00:43:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:09:27.914 00:43:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:09:28.171 true 00:09:28.171 00:43:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:28.171 00:43:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.430 00:43:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.688 00:43:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:09:28.688 00:43:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:09:28.688 true 00:09:28.688 00:43:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:28.688 00:43:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.947 00:43:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.205 00:43:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:09:29.205 00:43:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1036 00:09:29.205 true 00:09:29.463 00:43:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:29.463 00:43:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.463 00:43:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.720 00:43:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:09:29.720 00:43:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:09:29.978 true 00:09:29.978 00:43:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:29.978 00:43:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.236 00:43:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.236 00:43:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:09:30.236 00:43:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:09:30.493 true 00:09:30.493 00:43:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:30.493 00:43:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.752 00:43:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.010 00:43:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:09:31.010 00:43:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:09:31.010 true 00:09:31.010 00:43:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:31.010 00:43:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.268 00:43:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.527 00:43:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:09:31.527 00:43:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:09:31.786 true 00:09:31.786 00:43:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:31.786 00:43:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.786 00:43:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.044 00:43:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:09:32.044 00:43:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:09:32.303 true 00:09:32.303 00:43:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:32.303 
00:43:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.562 00:43:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.562 00:43:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:09:32.562 00:43:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:09:32.821 true 00:09:32.821 00:43:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:32.821 00:43:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.080 00:43:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.338 00:43:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:09:33.338 00:43:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:09:33.338 true 00:09:33.338 00:43:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:33.338 00:43:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.596 00:43:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.859 00:43:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:09:33.859 00:43:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:09:34.117 true 00:09:34.117 00:43:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:34.117 00:43:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.117 00:43:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.374 00:43:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:09:34.374 00:43:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:09:34.632 true 00:09:34.632 00:43:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:34.632 00:43:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.890 00:43:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.890 00:43:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:09:34.890 00:43:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:09:35.148 true 00:09:35.148 00:43:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:35.148 00:43:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.406 00:43:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.664 00:43:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:09:35.664 00:43:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:09:35.664 true 00:09:35.664 00:43:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:35.664 00:43:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.921 00:43:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.179 00:43:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:09:36.179 00:43:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:09:36.438 true 00:09:36.438 00:43:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:36.438 00:43:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.695 00:43:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.695 00:43:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:09:36.695 00:43:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:09:36.953 true 00:09:36.953 00:43:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:36.953 00:43:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.210 00:43:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.468 00:43:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:09:37.468 00:43:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:09:37.725 true 00:09:37.725 00:43:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:37.725 00:43:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.725 00:43:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.983 00:43:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:09:37.983 00:43:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:09:38.240 true 00:09:38.240 00:43:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:38.240 00:43:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.498 00:43:30 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.498 00:43:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:09:38.498 00:43:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:09:38.756 true 00:09:38.756 00:43:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:38.756 00:43:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.014 Initializing NVMe Controllers 00:09:39.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:39.014 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:09:39.014 Controller IO queue size 128, less than required. 00:09:39.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:39.014 WARNING: Some requested NVMe devices were skipped 00:09:39.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:39.014 Initialization complete. Launching workers. 00:09:39.014 ======================================================== 00:09:39.014 Latency(us) 00:09:39.014 Device Information : IOPS MiB/s Average min max 00:09:39.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 25594.80 12.50 5001.04 2006.84 9556.36 00:09:39.014 ======================================================== 00:09:39.014 Total : 25594.80 12.50 5001.04 2006.84 9556.36 00:09:39.014 00:09:39.014 00:43:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.272 00:43:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053 00:09:39.272 00:43:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:09:39.272 true 00:09:39.272 00:43:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 1581389 00:09:39.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (1581389) - No such process 00:09:39.272 00:43:31 -- target/ns_hotplug_stress.sh@44 -- # wait 1581389 00:09:39.272 00:43:31 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:09:39.272 00:43:31 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:09:39.272 00:43:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:39.272 00:43:31 -- nvmf/common.sh@117 -- # sync 00:09:39.272 00:43:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:39.272 00:43:31 -- nvmf/common.sh@120 -- # set +e 00:09:39.272 00:43:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:39.272 00:43:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:39.272 rmmod nvme_tcp 00:09:39.272 rmmod nvme_fabrics 00:09:39.272 rmmod nvme_keyring 00:09:39.531 00:43:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:39.531 00:43:31 -- nvmf/common.sh@124 -- # set -e 00:09:39.531 00:43:31 -- nvmf/common.sh@125 -- # return 0 00:09:39.531 00:43:31 -- nvmf/common.sh@478 -- # '[' -n 1581038 ']' 00:09:39.531 00:43:31 -- nvmf/common.sh@479 -- # killprocess 1581038 00:09:39.531 00:43:31 -- common/autotest_common.sh@936 -- # '[' -z 1581038 ']' 00:09:39.531 00:43:31 -- common/autotest_common.sh@940 -- # kill -0 1581038 00:09:39.531 00:43:31 -- 
common/autotest_common.sh@941 -- # uname 00:09:39.531 00:43:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:39.531 00:43:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1581038 00:09:39.531 00:43:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:39.531 00:43:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:39.531 00:43:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1581038' 00:09:39.531 killing process with pid 1581038 00:09:39.531 00:43:32 -- common/autotest_common.sh@955 -- # kill 1581038 00:09:39.531 00:43:32 -- common/autotest_common.sh@960 -- # wait 1581038 00:09:39.791 00:43:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:39.791 00:43:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:39.791 00:43:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:39.791 00:43:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:39.791 00:43:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:39.791 00:43:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.791 00:43:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:39.791 00:43:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.709 00:43:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:41.709 00:09:41.709 real 0m41.485s 00:09:41.709 user 2m35.563s 00:09:41.709 sys 0m12.594s 00:09:41.709 00:43:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:41.709 00:43:34 -- common/autotest_common.sh@10 -- # set +x 00:09:41.709 ************************************ 00:09:41.709 END TEST nvmf_ns_hotplug_stress 00:09:41.709 ************************************ 00:09:41.709 00:43:34 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:41.709 00:43:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:41.709 00:43:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:41.709 00:43:34 -- common/autotest_common.sh@10 -- # set +x 00:09:41.968 ************************************ 00:09:41.968 START TEST nvmf_connect_stress 00:09:41.968 ************************************ 00:09:41.968 00:43:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:41.968 * Looking for test storage... 
00:09:41.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.968 00:43:34 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.969 00:43:34 -- nvmf/common.sh@7 -- # uname -s 00:09:41.969 00:43:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.969 00:43:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.969 00:43:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.969 00:43:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.969 00:43:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.969 00:43:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.969 00:43:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.969 00:43:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.969 00:43:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.969 00:43:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.969 00:43:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:41.969 00:43:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:41.969 00:43:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.969 00:43:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.969 00:43:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.969 00:43:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.969 00:43:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.969 00:43:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.969 00:43:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.969 00:43:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.969 00:43:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.969 00:43:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.969 00:43:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.969 00:43:34 -- paths/export.sh@5 -- # export PATH 00:09:41.969 00:43:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.969 00:43:34 -- nvmf/common.sh@47 -- # : 0 00:09:41.969 00:43:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:41.969 00:43:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:41.969 00:43:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.969 00:43:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.969 00:43:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.969 00:43:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:41.969 00:43:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:41.969 00:43:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:41.969 00:43:34 -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:41.969 00:43:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:41.969 00:43:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.969 00:43:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:41.969 00:43:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:41.969 00:43:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:41.969 00:43:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.969 00:43:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:41.969 00:43:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.969 00:43:34 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:41.969 00:43:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:41.969 00:43:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:41.969 00:43:34 -- common/autotest_common.sh@10 -- # set +x 00:09:47.240 00:43:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:47.240 00:43:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.240 00:43:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.240 00:43:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.240 00:43:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.240 00:43:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.240 00:43:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.240 00:43:39 -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.240 00:43:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.240 00:43:39 -- nvmf/common.sh@296 -- # e810=() 00:09:47.240 00:43:39 -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.240 00:43:39 -- nvmf/common.sh@297 -- # x722=() 
00:09:47.240 00:43:39 -- nvmf/common.sh@297 -- # local -ga x722 00:09:47.240 00:43:39 -- nvmf/common.sh@298 -- # mlx=() 00:09:47.240 00:43:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.240 00:43:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.240 00:43:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.240 00:43:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.240 00:43:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.240 00:43:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.240 00:43:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.240 00:43:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.240 00:43:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.240 00:43:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.240 00:43:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.240 00:43:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.240 00:43:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:47.240 00:43:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:47.240 00:43:39 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:47.240 00:43:39 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:47.240 00:43:39 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:47.240 00:43:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.241 00:43:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.241 00:43:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:47.241 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:47.241 00:43:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.241 00:43:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:47.241 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:47.241 00:43:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.241 00:43:39 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.241 00:43:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.241 00:43:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:47.241 00:43:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.241 00:43:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:47.241 Found net devices under 0000:86:00.0: cvl_0_0 00:09:47.241 00:43:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:09:47.241 00:43:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.241 00:43:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.241 00:43:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:47.241 00:43:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.241 00:43:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:47.241 Found net devices under 0000:86:00.1: cvl_0_1 00:09:47.241 00:43:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.241 00:43:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:47.241 00:43:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:47.241 00:43:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:47.241 00:43:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.241 00:43:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.241 00:43:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.241 00:43:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:47.241 00:43:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.241 00:43:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.241 00:43:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:47.241 00:43:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.241 00:43:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.241 00:43:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:47.241 00:43:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:47.241 00:43:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.241 00:43:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.241 00:43:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.241 00:43:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.241 00:43:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:47.241 00:43:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.241 00:43:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.241 00:43:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.241 00:43:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:47.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:09:47.241 00:09:47.241 --- 10.0.0.2 ping statistics --- 00:09:47.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.241 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:47.241 00:43:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.405 ms 00:09:47.241 00:09:47.241 --- 10.0.0.1 ping statistics --- 00:09:47.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.241 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:09:47.241 00:43:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.241 00:43:39 -- nvmf/common.sh@411 -- # return 0 00:09:47.241 00:43:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:47.241 00:43:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.241 00:43:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:47.241 00:43:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.241 00:43:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:47.241 00:43:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:47.241 00:43:39 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:47.241 00:43:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:47.241 00:43:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:47.241 00:43:39 -- common/autotest_common.sh@10 -- # set +x 00:09:47.241 00:43:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:47.241 00:43:39 -- nvmf/common.sh@470 -- # nvmfpid=1590126 00:09:47.241 00:43:39 -- nvmf/common.sh@471 -- # waitforlisten 1590126 00:09:47.241 00:43:39 -- common/autotest_common.sh@817 -- # '[' -z 1590126 ']' 00:09:47.241 00:43:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.241 00:43:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:47.241 00:43:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.241 00:43:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:47.241 00:43:39 -- common/autotest_common.sh@10 -- # set +x 00:09:47.241 [2024-04-27 00:43:39.873264] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:09:47.241 [2024-04-27 00:43:39.873305] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.241 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.241 [2024-04-27 00:43:39.933143] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.501 [2024-04-27 00:43:40.015011] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.501 [2024-04-27 00:43:40.015050] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.501 [2024-04-27 00:43:40.015057] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.501 [2024-04-27 00:43:40.015063] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.501 [2024-04-27 00:43:40.015074] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:47.501 [2024-04-27 00:43:40.015173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.501 [2024-04-27 00:43:40.015391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.501 [2024-04-27 00:43:40.015393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.068 00:43:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:48.068 00:43:40 -- common/autotest_common.sh@850 -- # return 0 00:09:48.068 00:43:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:48.068 00:43:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:48.068 00:43:40 -- common/autotest_common.sh@10 -- # set +x 00:09:48.068 00:43:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.068 00:43:40 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:48.068 00:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.068 00:43:40 -- common/autotest_common.sh@10 -- # set +x 00:09:48.068 [2024-04-27 00:43:40.721492] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.068 00:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.068 00:43:40 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:48.068 00:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.068 00:43:40 -- common/autotest_common.sh@10 -- # set +x 00:09:48.068 00:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.068 00:43:40 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.068 00:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.068 00:43:40 -- common/autotest_common.sh@10 -- # set +x 00:09:48.068 [2024-04-27 00:43:40.759179] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.343 00:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.343 00:43:40 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:48.343 00:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.343 00:43:40 -- common/autotest_common.sh@10 -- # set +x 00:09:48.343 NULL1 00:09:48.343 00:43:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.343 00:43:40 -- target/connect_stress.sh@21 -- # PERF_PID=1590283 00:09:48.343 00:43:40 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:48.343 00:43:40 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:48.343 00:43:40 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:48.343 00:43:40 -- target/connect_stress.sh@27 -- # seq 1 20 00:09:48.343 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.343 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.343 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:48.344 00:43:40 -- target/connect_stress.sh@28 -- # cat 00:09:48.344 00:43:40 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:48.344 00:43:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:48.344 00:43:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.344 00:43:40 -- common/autotest_common.sh@10 -- # set +x 00:09:48.640 00:43:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.640 00:43:41 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:48.640 00:43:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:48.640 00:43:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.640 00:43:41 -- common/autotest_common.sh@10 -- # set +x 00:09:48.913 00:43:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:48.913 00:43:41 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:48.913 00:43:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:48.913 00:43:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:48.914 00:43:41 -- common/autotest_common.sh@10 -- # set +x 00:09:49.172 00:43:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.172 00:43:41 -- target/connect_stress.sh@34 -- # 
kill -0 1590283 00:09:49.172 00:43:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:49.172 00:43:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.172 00:43:41 -- common/autotest_common.sh@10 -- # set +x 00:09:49.740 00:43:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.740 00:43:42 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:49.740 00:43:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:49.740 00:43:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.740 00:43:42 -- common/autotest_common.sh@10 -- # set +x 00:09:49.999 00:43:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:49.999 00:43:42 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:49.999 00:43:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:49.999 00:43:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:49.999 00:43:42 -- common/autotest_common.sh@10 -- # set +x 00:09:50.258 00:43:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.258 00:43:42 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:50.258 00:43:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:50.258 00:43:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.258 00:43:42 -- common/autotest_common.sh@10 -- # set +x 00:09:50.517 00:43:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.517 00:43:43 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:50.517 00:43:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:50.517 00:43:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.517 00:43:43 -- common/autotest_common.sh@10 -- # set +x 00:09:50.776 00:43:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:50.776 00:43:43 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:50.776 00:43:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:50.776 00:43:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:50.776 00:43:43 -- common/autotest_common.sh@10 -- # set +x 00:09:51.344 00:43:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.344 00:43:43 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:51.344 00:43:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:51.344 00:43:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.344 00:43:43 -- common/autotest_common.sh@10 -- # set +x 00:09:51.603 00:43:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.603 00:43:44 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:51.603 00:43:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:51.603 00:43:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.603 00:43:44 -- common/autotest_common.sh@10 -- # set +x 00:09:51.862 00:43:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.862 00:43:44 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:51.862 00:43:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:51.862 00:43:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.862 00:43:44 -- common/autotest_common.sh@10 -- # set +x 00:09:52.121 00:43:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:52.121 00:43:44 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:52.121 00:43:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:52.121 00:43:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:52.121 00:43:44 -- common/autotest_common.sh@10 -- # set +x 00:09:52.379 00:43:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:52.379 00:43:45 -- target/connect_stress.sh@34 -- # kill -0 
1590283 00:09:52.379 00:43:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:52.379 00:43:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:52.379 00:43:45 -- common/autotest_common.sh@10 -- # set +x 00:09:52.946 00:43:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:52.946 00:43:45 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:52.946 00:43:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:52.946 00:43:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:52.946 00:43:45 -- common/autotest_common.sh@10 -- # set +x 00:09:53.204 00:43:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:53.204 00:43:45 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:53.204 00:43:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:53.204 00:43:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:53.204 00:43:45 -- common/autotest_common.sh@10 -- # set +x 00:09:53.463 00:43:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:53.463 00:43:46 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:53.463 00:43:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:53.463 00:43:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:53.463 00:43:46 -- common/autotest_common.sh@10 -- # set +x 00:09:53.721 00:43:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:53.721 00:43:46 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:53.721 00:43:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:53.721 00:43:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:53.721 00:43:46 -- common/autotest_common.sh@10 -- # set +x 00:09:54.288 00:43:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.288 00:43:46 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:54.288 00:43:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:54.288 00:43:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.288 00:43:46 -- common/autotest_common.sh@10 -- # set +x 00:09:54.547 00:43:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.547 00:43:47 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:54.547 00:43:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:54.547 00:43:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.547 00:43:47 -- common/autotest_common.sh@10 -- # set +x 00:09:54.805 00:43:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.805 00:43:47 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:54.805 00:43:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:54.805 00:43:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.805 00:43:47 -- common/autotest_common.sh@10 -- # set +x 00:09:55.063 00:43:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.063 00:43:47 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:55.063 00:43:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:55.063 00:43:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.063 00:43:47 -- common/autotest_common.sh@10 -- # set +x 00:09:55.322 00:43:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.322 00:43:47 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:55.322 00:43:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:55.322 00:43:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.322 00:43:47 -- common/autotest_common.sh@10 -- # set +x 00:09:55.888 00:43:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.888 00:43:48 -- target/connect_stress.sh@34 -- # kill -0 1590283 
00:09:55.888 00:43:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:55.888 00:43:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.888 00:43:48 -- common/autotest_common.sh@10 -- # set +x 00:09:56.146 00:43:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:56.146 00:43:48 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:56.146 00:43:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:56.146 00:43:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:56.146 00:43:48 -- common/autotest_common.sh@10 -- # set +x 00:09:56.404 00:43:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:56.404 00:43:48 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:56.404 00:43:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:56.404 00:43:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:56.404 00:43:48 -- common/autotest_common.sh@10 -- # set +x 00:09:56.662 00:43:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:56.662 00:43:49 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:56.662 00:43:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:56.662 00:43:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:56.662 00:43:49 -- common/autotest_common.sh@10 -- # set +x 00:09:56.921 00:43:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:56.921 00:43:49 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:56.921 00:43:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:56.921 00:43:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:56.921 00:43:49 -- common/autotest_common.sh@10 -- # set +x 00:09:57.489 00:43:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.489 00:43:49 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:57.489 00:43:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:57.489 00:43:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.489 00:43:49 -- common/autotest_common.sh@10 -- # set +x 00:09:57.747 00:43:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.747 00:43:50 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:57.747 00:43:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:57.747 00:43:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.747 00:43:50 -- common/autotest_common.sh@10 -- # set +x 00:09:58.005 00:43:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.005 00:43:50 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:58.005 00:43:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:58.005 00:43:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:58.005 00:43:50 -- common/autotest_common.sh@10 -- # set +x 00:09:58.264 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:58.264 00:43:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:58.264 00:43:50 -- target/connect_stress.sh@34 -- # kill -0 1590283 00:09:58.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1590283) - No such process 00:09:58.264 00:43:50 -- target/connect_stress.sh@38 -- # wait 1590283 00:09:58.264 00:43:50 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:58.264 00:43:50 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:58.264 00:43:50 -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:58.264 00:43:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:58.264 00:43:50 -- nvmf/common.sh@117 -- # sync 
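The long run of paired "kill -0 1590283" / "rpc_cmd" entries above is connect_stress.sh polling the stress client (PERF_PID) and replaying the prepared RPC batch each time it is still alive; once kill -0 reports "No such process" the script waits on the pid, removes rpc.txt and falls through to nvmftestfini. A minimal sketch of that pattern, assuming the batch in rpc.txt is fed to rpc_cmd on stdin (how the file is consumed is not visible in this trace) and treating rpc_cmd as the suite's wrapper around rpc.py:

  # Sketch of the monitor loop traced above (connect_stress.sh @27-@41).
  rpcs=$testdir/rpc.txt      # populated by the "seq 1 20 / cat" loop shown earlier
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
      -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!
  while kill -0 $PERF_PID 2>/dev/null; do
      rpc_cmd < $rpcs        # replay the batch while the stress client runs
  done
  wait $PERF_PID
  rm -f $rpcs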
00:09:58.264 00:43:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:58.264 00:43:50 -- nvmf/common.sh@120 -- # set +e 00:09:58.264 00:43:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:58.264 00:43:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:58.264 rmmod nvme_tcp 00:09:58.264 rmmod nvme_fabrics 00:09:58.264 rmmod nvme_keyring 00:09:58.523 00:43:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:58.523 00:43:50 -- nvmf/common.sh@124 -- # set -e 00:09:58.523 00:43:50 -- nvmf/common.sh@125 -- # return 0 00:09:58.523 00:43:50 -- nvmf/common.sh@478 -- # '[' -n 1590126 ']' 00:09:58.523 00:43:50 -- nvmf/common.sh@479 -- # killprocess 1590126 00:09:58.523 00:43:50 -- common/autotest_common.sh@936 -- # '[' -z 1590126 ']' 00:09:58.523 00:43:50 -- common/autotest_common.sh@940 -- # kill -0 1590126 00:09:58.523 00:43:50 -- common/autotest_common.sh@941 -- # uname 00:09:58.523 00:43:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:58.523 00:43:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1590126 00:09:58.523 00:43:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:58.523 00:43:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:58.523 00:43:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1590126' 00:09:58.523 killing process with pid 1590126 00:09:58.523 00:43:51 -- common/autotest_common.sh@955 -- # kill 1590126 00:09:58.523 00:43:51 -- common/autotest_common.sh@960 -- # wait 1590126 00:09:58.781 00:43:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:58.781 00:43:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:58.781 00:43:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:58.781 00:43:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:58.781 00:43:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:58.781 00:43:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.781 00:43:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.781 00:43:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.688 00:43:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:00.688 00:10:00.688 real 0m18.805s 00:10:00.688 user 0m40.670s 00:10:00.688 sys 0m7.938s 00:10:00.688 00:43:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:00.688 00:43:53 -- common/autotest_common.sh@10 -- # set +x 00:10:00.688 ************************************ 00:10:00.688 END TEST nvmf_connect_stress 00:10:00.688 ************************************ 00:10:00.688 00:43:53 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:00.688 00:43:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:00.688 00:43:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:00.688 00:43:53 -- common/autotest_common.sh@10 -- # set +x 00:10:00.948 ************************************ 00:10:00.948 START TEST nvmf_fused_ordering 00:10:00.948 ************************************ 00:10:00.948 00:43:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:00.948 * Looking for test storage... 
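Between the two tests, nvmftestfini (traced just before the "START TEST nvmf_fused_ordering" banner) tears the first target down: nvmfcleanup retries "modprobe -v -r nvme-tcp" and "modprobe -v -r nvme-fabrics", killprocess stops nvmf_tgt pid 1590126, and nvmf_tcp_fini removes the spdk namespace and flushes cvl_0_1. A condensed sketch of that cleanup order; the retry delay, error handling and the exact namespace-removal command are assumptions, only the commands named here appear in the trace:

  # Sketch of the teardown traced above.
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1                      # retry interval not shown in the trace
  done
  set -e
  kill 1590126 && wait 1590126     # killprocess on the nvmf_tgt pid
  ip netns delete cvl_0_0_ns_spdk  # remove_spdk_ns (exact command assumed)
  ip -4 addr flush cvl_0_1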
00:10:00.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.948 00:43:53 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.948 00:43:53 -- nvmf/common.sh@7 -- # uname -s 00:10:00.948 00:43:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.948 00:43:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.948 00:43:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.948 00:43:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.948 00:43:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.948 00:43:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.948 00:43:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.948 00:43:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.948 00:43:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.948 00:43:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.948 00:43:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:00.948 00:43:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:00.948 00:43:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.948 00:43:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.948 00:43:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.948 00:43:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.948 00:43:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.948 00:43:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.948 00:43:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.948 00:43:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.948 00:43:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.948 00:43:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.948 00:43:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.948 00:43:53 -- paths/export.sh@5 -- # export PATH 00:10:00.948 00:43:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.948 00:43:53 -- nvmf/common.sh@47 -- # : 0 00:10:00.948 00:43:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.948 00:43:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.948 00:43:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.948 00:43:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.948 00:43:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.948 00:43:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.948 00:43:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.948 00:43:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.948 00:43:53 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:00.948 00:43:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:00.948 00:43:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.948 00:43:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:00.948 00:43:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:00.948 00:43:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:00.948 00:43:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.948 00:43:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.948 00:43:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.948 00:43:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:00.948 00:43:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:00.948 00:43:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:00.948 00:43:53 -- common/autotest_common.sh@10 -- # set +x 00:10:06.221 00:43:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:06.221 00:43:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.221 00:43:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.221 00:43:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.221 00:43:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.221 00:43:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.221 00:43:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.221 00:43:58 -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.221 00:43:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.221 00:43:58 -- nvmf/common.sh@296 -- # e810=() 00:10:06.221 00:43:58 -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.221 00:43:58 -- nvmf/common.sh@297 -- # x722=() 
00:10:06.221 00:43:58 -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.221 00:43:58 -- nvmf/common.sh@298 -- # mlx=() 00:10:06.221 00:43:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.221 00:43:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.221 00:43:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.221 00:43:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.221 00:43:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.221 00:43:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.221 00:43:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.221 00:43:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.221 00:43:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.221 00:43:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.221 00:43:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.221 00:43:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.221 00:43:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.221 00:43:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.221 00:43:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.221 00:43:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.221 00:43:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:06.221 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:06.221 00:43:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.221 00:43:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:06.221 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:06.221 00:43:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.221 00:43:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.221 00:43:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.221 00:43:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.221 00:43:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:06.221 00:43:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.221 00:43:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:06.221 Found net devices under 0000:86:00.0: cvl_0_0 00:10:06.221 00:43:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
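The gather_supported_nvmf_pci_devs entries above match both E810 ports (vendor 0x8086, device 0x159b) and resolve each PCI function to its kernel interface through sysfs, which is where the "Found net devices under 0000:86:00.0: cvl_0_0" line comes from. A minimal sketch of that lookup for one port, using only constructs visible in the trace:

  # Sketch: map an E810 PCI function to its net device name via sysfs.
  pci=0000:86:00.0
  pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"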
00:10:06.221 00:43:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.221 00:43:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.221 00:43:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:06.221 00:43:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.222 00:43:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:06.222 Found net devices under 0000:86:00.1: cvl_0_1 00:10:06.222 00:43:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.222 00:43:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:06.222 00:43:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:06.222 00:43:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:06.222 00:43:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:06.222 00:43:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:06.222 00:43:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.222 00:43:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.222 00:43:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.222 00:43:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.222 00:43:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.222 00:43:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.222 00:43:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.222 00:43:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.222 00:43:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.222 00:43:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.222 00:43:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.222 00:43:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.222 00:43:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.222 00:43:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.222 00:43:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.222 00:43:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.222 00:43:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.222 00:43:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.222 00:43:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.222 00:43:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:10:06.222 00:10:06.222 --- 10.0.0.2 ping statistics --- 00:10:06.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.222 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:06.222 00:43:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:10:06.222 00:10:06.222 --- 10.0.0.1 ping statistics --- 00:10:06.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.222 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:10:06.222 00:43:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.222 00:43:58 -- nvmf/common.sh@411 -- # return 0 00:10:06.222 00:43:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:06.222 00:43:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.222 00:43:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:06.222 00:43:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:06.222 00:43:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.222 00:43:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:06.222 00:43:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:06.222 00:43:58 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:06.222 00:43:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:06.222 00:43:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:06.222 00:43:58 -- common/autotest_common.sh@10 -- # set +x 00:10:06.222 00:43:58 -- nvmf/common.sh@470 -- # nvmfpid=1595439 00:10:06.222 00:43:58 -- nvmf/common.sh@471 -- # waitforlisten 1595439 00:10:06.222 00:43:58 -- common/autotest_common.sh@817 -- # '[' -z 1595439 ']' 00:10:06.222 00:43:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.222 00:43:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:06.222 00:43:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.222 00:43:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:06.222 00:43:58 -- common/autotest_common.sh@10 -- # set +x 00:10:06.222 00:43:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:06.222 [2024-04-27 00:43:58.803002] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:10:06.222 [2024-04-27 00:43:58.803043] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.222 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.222 [2024-04-27 00:43:58.858409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.480 [2024-04-27 00:43:58.937230] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.480 [2024-04-27 00:43:58.937262] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.480 [2024-04-27 00:43:58.937269] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.480 [2024-04-27 00:43:58.937275] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.480 [2024-04-27 00:43:58.937280] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
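The nvmf_tcp_init entries above rebuild the same two-namespace topology the connect_stress run used: the target-side port cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the host namespace, TCP/4420 is opened in iptables, and both directions are verified with a single ping. A condensed sketch of those commands as they appear in the trace, with only the ordering compressed:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, host netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> host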
00:10:06.480 [2024-04-27 00:43:58.937299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.044 00:43:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:07.044 00:43:59 -- common/autotest_common.sh@850 -- # return 0 00:10:07.044 00:43:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:07.044 00:43:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:07.044 00:43:59 -- common/autotest_common.sh@10 -- # set +x 00:10:07.044 00:43:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.044 00:43:59 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.044 00:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.044 00:43:59 -- common/autotest_common.sh@10 -- # set +x 00:10:07.044 [2024-04-27 00:43:59.624005] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.044 00:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.044 00:43:59 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:07.044 00:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.044 00:43:59 -- common/autotest_common.sh@10 -- # set +x 00:10:07.044 00:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.044 00:43:59 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:07.044 00:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.044 00:43:59 -- common/autotest_common.sh@10 -- # set +x 00:10:07.044 [2024-04-27 00:43:59.640144] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:07.044 00:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.044 00:43:59 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:07.044 00:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.044 00:43:59 -- common/autotest_common.sh@10 -- # set +x 00:10:07.044 NULL1 00:10:07.044 00:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.044 00:43:59 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:07.044 00:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.044 00:43:59 -- common/autotest_common.sh@10 -- # set +x 00:10:07.044 00:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.044 00:43:59 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:07.044 00:43:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:07.044 00:43:59 -- common/autotest_common.sh@10 -- # set +x 00:10:07.044 00:43:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:07.044 00:43:59 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:07.044 [2024-04-27 00:43:59.681882] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:10:07.044 [2024-04-27 00:43:59.681903] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595685 ] 00:10:07.045 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.978 Attached to nqn.2016-06.io.spdk:cnode1 00:10:07.978 Namespace ID: 1 size: 1GB 00:10:07.978 fused_ordering(0) 00:10:07.978 fused_ordering(1) 00:10:07.978 fused_ordering(2) 00:10:07.978 fused_ordering(3) 00:10:07.978 fused_ordering(4) 00:10:07.978 fused_ordering(5) 00:10:07.978 fused_ordering(6) 00:10:07.978 fused_ordering(7) 00:10:07.978 fused_ordering(8) 00:10:07.978 fused_ordering(9) 00:10:07.978 fused_ordering(10) 00:10:07.978 fused_ordering(11) 00:10:07.978 fused_ordering(12) 00:10:07.978 fused_ordering(13) 00:10:07.978 fused_ordering(14) 00:10:07.978 fused_ordering(15) 00:10:07.978 fused_ordering(16) 00:10:07.978 fused_ordering(17) 00:10:07.978 fused_ordering(18) 00:10:07.978 fused_ordering(19) 00:10:07.978 fused_ordering(20) 00:10:07.978 fused_ordering(21) 00:10:07.978 fused_ordering(22) 00:10:07.978 fused_ordering(23) 00:10:07.978 fused_ordering(24) 00:10:07.978 fused_ordering(25) 00:10:07.978 fused_ordering(26) 00:10:07.978 fused_ordering(27) 00:10:07.978 fused_ordering(28) 00:10:07.978 fused_ordering(29) 00:10:07.978 fused_ordering(30) 00:10:07.978 fused_ordering(31) 00:10:07.978 fused_ordering(32) 00:10:07.978 fused_ordering(33) 00:10:07.978 fused_ordering(34) 00:10:07.978 fused_ordering(35) 00:10:07.978 fused_ordering(36) 00:10:07.978 fused_ordering(37) 00:10:07.978 fused_ordering(38) 00:10:07.978 fused_ordering(39) 00:10:07.978 fused_ordering(40) 00:10:07.978 fused_ordering(41) 00:10:07.978 fused_ordering(42) 00:10:07.978 fused_ordering(43) 00:10:07.978 fused_ordering(44) 00:10:07.978 fused_ordering(45) 00:10:07.978 fused_ordering(46) 00:10:07.978 fused_ordering(47) 00:10:07.978 fused_ordering(48) 00:10:07.978 fused_ordering(49) 00:10:07.978 fused_ordering(50) 00:10:07.978 fused_ordering(51) 00:10:07.978 fused_ordering(52) 00:10:07.978 fused_ordering(53) 00:10:07.978 fused_ordering(54) 00:10:07.978 fused_ordering(55) 00:10:07.978 fused_ordering(56) 00:10:07.978 fused_ordering(57) 00:10:07.978 fused_ordering(58) 00:10:07.978 fused_ordering(59) 00:10:07.978 fused_ordering(60) 00:10:07.978 fused_ordering(61) 00:10:07.978 fused_ordering(62) 00:10:07.978 fused_ordering(63) 00:10:07.978 fused_ordering(64) 00:10:07.978 fused_ordering(65) 00:10:07.978 fused_ordering(66) 00:10:07.978 fused_ordering(67) 00:10:07.978 fused_ordering(68) 00:10:07.978 fused_ordering(69) 00:10:07.978 fused_ordering(70) 00:10:07.978 fused_ordering(71) 00:10:07.978 fused_ordering(72) 00:10:07.978 fused_ordering(73) 00:10:07.978 fused_ordering(74) 00:10:07.978 fused_ordering(75) 00:10:07.978 fused_ordering(76) 00:10:07.978 fused_ordering(77) 00:10:07.978 fused_ordering(78) 00:10:07.978 fused_ordering(79) 00:10:07.978 fused_ordering(80) 00:10:07.978 fused_ordering(81) 00:10:07.978 fused_ordering(82) 00:10:07.978 fused_ordering(83) 00:10:07.978 fused_ordering(84) 00:10:07.978 fused_ordering(85) 00:10:07.978 fused_ordering(86) 00:10:07.978 fused_ordering(87) 00:10:07.978 fused_ordering(88) 00:10:07.978 fused_ordering(89) 00:10:07.978 fused_ordering(90) 00:10:07.978 fused_ordering(91) 00:10:07.978 fused_ordering(92) 00:10:07.978 fused_ordering(93) 00:10:07.978 fused_ordering(94) 00:10:07.978 fused_ordering(95) 00:10:07.978 fused_ordering(96) 00:10:07.978 
fused_ordering(97) 00:10:07.978 fused_ordering(98) 00:10:07.978 fused_ordering(99) 00:10:07.978 fused_ordering(100) 00:10:07.978 fused_ordering(101) 00:10:07.978 fused_ordering(102) 00:10:07.978 fused_ordering(103) 00:10:07.978 fused_ordering(104) 00:10:07.978 fused_ordering(105) 00:10:07.978 fused_ordering(106) 00:10:07.978 fused_ordering(107) 00:10:07.978 fused_ordering(108) 00:10:07.979 fused_ordering(109) 00:10:07.979 fused_ordering(110) 00:10:07.979 fused_ordering(111) 00:10:07.979 fused_ordering(112) 00:10:07.979 fused_ordering(113) 00:10:07.979 fused_ordering(114) 00:10:07.979 fused_ordering(115) 00:10:07.979 fused_ordering(116) 00:10:07.979 fused_ordering(117) 00:10:07.979 fused_ordering(118) 00:10:07.979 fused_ordering(119) 00:10:07.979 fused_ordering(120) 00:10:07.979 fused_ordering(121) 00:10:07.979 fused_ordering(122) 00:10:07.979 fused_ordering(123) 00:10:07.979 fused_ordering(124) 00:10:07.979 fused_ordering(125) 00:10:07.979 fused_ordering(126) 00:10:07.979 fused_ordering(127) 00:10:07.979 fused_ordering(128) 00:10:07.979 fused_ordering(129) 00:10:07.979 fused_ordering(130) 00:10:07.979 fused_ordering(131) 00:10:07.979 fused_ordering(132) 00:10:07.979 fused_ordering(133) 00:10:07.979 fused_ordering(134) 00:10:07.979 fused_ordering(135) 00:10:07.979 fused_ordering(136) 00:10:07.979 fused_ordering(137) 00:10:07.979 fused_ordering(138) 00:10:07.979 fused_ordering(139) 00:10:07.979 fused_ordering(140) 00:10:07.979 fused_ordering(141) 00:10:07.979 fused_ordering(142) 00:10:07.979 fused_ordering(143) 00:10:07.979 fused_ordering(144) 00:10:07.979 fused_ordering(145) 00:10:07.979 fused_ordering(146) 00:10:07.979 fused_ordering(147) 00:10:07.979 fused_ordering(148) 00:10:07.979 fused_ordering(149) 00:10:07.979 fused_ordering(150) 00:10:07.979 fused_ordering(151) 00:10:07.979 fused_ordering(152) 00:10:07.979 fused_ordering(153) 00:10:07.979 fused_ordering(154) 00:10:07.979 fused_ordering(155) 00:10:07.979 fused_ordering(156) 00:10:07.979 fused_ordering(157) 00:10:07.979 fused_ordering(158) 00:10:07.979 fused_ordering(159) 00:10:07.979 fused_ordering(160) 00:10:07.979 fused_ordering(161) 00:10:07.979 fused_ordering(162) 00:10:07.979 fused_ordering(163) 00:10:07.979 fused_ordering(164) 00:10:07.979 fused_ordering(165) 00:10:07.979 fused_ordering(166) 00:10:07.979 fused_ordering(167) 00:10:07.979 fused_ordering(168) 00:10:07.979 fused_ordering(169) 00:10:07.979 fused_ordering(170) 00:10:07.979 fused_ordering(171) 00:10:07.979 fused_ordering(172) 00:10:07.979 fused_ordering(173) 00:10:07.979 fused_ordering(174) 00:10:07.979 fused_ordering(175) 00:10:07.979 fused_ordering(176) 00:10:07.979 fused_ordering(177) 00:10:07.979 fused_ordering(178) 00:10:07.979 fused_ordering(179) 00:10:07.979 fused_ordering(180) 00:10:07.979 fused_ordering(181) 00:10:07.979 fused_ordering(182) 00:10:07.979 fused_ordering(183) 00:10:07.979 fused_ordering(184) 00:10:07.979 fused_ordering(185) 00:10:07.979 fused_ordering(186) 00:10:07.979 fused_ordering(187) 00:10:07.979 fused_ordering(188) 00:10:07.979 fused_ordering(189) 00:10:07.979 fused_ordering(190) 00:10:07.979 fused_ordering(191) 00:10:07.979 fused_ordering(192) 00:10:07.979 fused_ordering(193) 00:10:07.979 fused_ordering(194) 00:10:07.979 fused_ordering(195) 00:10:07.979 fused_ordering(196) 00:10:07.979 fused_ordering(197) 00:10:07.979 fused_ordering(198) 00:10:07.979 fused_ordering(199) 00:10:07.979 fused_ordering(200) 00:10:07.979 fused_ordering(201) 00:10:07.979 fused_ordering(202) 00:10:07.979 fused_ordering(203) 00:10:07.979 fused_ordering(204) 
00:10:07.979 fused_ordering(205) 00:10:08.915 fused_ordering(206) 00:10:08.915 fused_ordering(207) 00:10:08.915 fused_ordering(208) 00:10:08.915 fused_ordering(209) 00:10:08.915 fused_ordering(210) 00:10:08.915 fused_ordering(211) 00:10:08.915 fused_ordering(212) 00:10:08.915 fused_ordering(213) 00:10:08.915 fused_ordering(214) 00:10:08.915 fused_ordering(215) 00:10:08.915 fused_ordering(216) 00:10:08.915 fused_ordering(217) 00:10:08.915 fused_ordering(218) 00:10:08.915 fused_ordering(219) 00:10:08.915 fused_ordering(220) 00:10:08.915 fused_ordering(221) 00:10:08.915 fused_ordering(222) 00:10:08.915 fused_ordering(223) 00:10:08.915 fused_ordering(224) 00:10:08.915 fused_ordering(225) 00:10:08.915 fused_ordering(226) 00:10:08.915 fused_ordering(227) 00:10:08.915 fused_ordering(228) 00:10:08.915 fused_ordering(229) 00:10:08.915 fused_ordering(230) 00:10:08.915 fused_ordering(231) 00:10:08.915 fused_ordering(232) 00:10:08.915 fused_ordering(233) 00:10:08.915 fused_ordering(234) 00:10:08.915 fused_ordering(235) 00:10:08.915 fused_ordering(236) 00:10:08.915 fused_ordering(237) 00:10:08.915 fused_ordering(238) 00:10:08.915 fused_ordering(239) 00:10:08.915 fused_ordering(240) 00:10:08.915 fused_ordering(241) 00:10:08.915 fused_ordering(242) 00:10:08.915 fused_ordering(243) 00:10:08.915 fused_ordering(244) 00:10:08.915 fused_ordering(245) 00:10:08.915 fused_ordering(246) 00:10:08.915 fused_ordering(247) 00:10:08.915 fused_ordering(248) 00:10:08.915 fused_ordering(249) 00:10:08.915 fused_ordering(250) 00:10:08.915 fused_ordering(251) 00:10:08.915 fused_ordering(252) 00:10:08.915 fused_ordering(253) 00:10:08.915 fused_ordering(254) 00:10:08.915 fused_ordering(255) 00:10:08.915 fused_ordering(256) 00:10:08.915 fused_ordering(257) 00:10:08.915 fused_ordering(258) 00:10:08.915 fused_ordering(259) 00:10:08.915 fused_ordering(260) 00:10:08.915 fused_ordering(261) 00:10:08.915 fused_ordering(262) 00:10:08.915 fused_ordering(263) 00:10:08.915 fused_ordering(264) 00:10:08.915 fused_ordering(265) 00:10:08.915 fused_ordering(266) 00:10:08.915 fused_ordering(267) 00:10:08.915 fused_ordering(268) 00:10:08.915 fused_ordering(269) 00:10:08.915 fused_ordering(270) 00:10:08.915 fused_ordering(271) 00:10:08.915 fused_ordering(272) 00:10:08.915 fused_ordering(273) 00:10:08.915 fused_ordering(274) 00:10:08.915 fused_ordering(275) 00:10:08.915 fused_ordering(276) 00:10:08.915 fused_ordering(277) 00:10:08.915 fused_ordering(278) 00:10:08.915 fused_ordering(279) 00:10:08.915 fused_ordering(280) 00:10:08.915 fused_ordering(281) 00:10:08.915 fused_ordering(282) 00:10:08.915 fused_ordering(283) 00:10:08.915 fused_ordering(284) 00:10:08.915 fused_ordering(285) 00:10:08.915 fused_ordering(286) 00:10:08.915 fused_ordering(287) 00:10:08.915 fused_ordering(288) 00:10:08.915 fused_ordering(289) 00:10:08.915 fused_ordering(290) 00:10:08.915 fused_ordering(291) 00:10:08.915 fused_ordering(292) 00:10:08.915 fused_ordering(293) 00:10:08.915 fused_ordering(294) 00:10:08.915 fused_ordering(295) 00:10:08.916 fused_ordering(296) 00:10:08.916 fused_ordering(297) 00:10:08.916 fused_ordering(298) 00:10:08.916 fused_ordering(299) 00:10:08.916 fused_ordering(300) 00:10:08.916 fused_ordering(301) 00:10:08.916 fused_ordering(302) 00:10:08.916 fused_ordering(303) 00:10:08.916 fused_ordering(304) 00:10:08.916 fused_ordering(305) 00:10:08.916 fused_ordering(306) 00:10:08.916 fused_ordering(307) 00:10:08.916 fused_ordering(308) 00:10:08.916 fused_ordering(309) 00:10:08.916 fused_ordering(310) 00:10:08.916 fused_ordering(311) 00:10:08.916 
fused_ordering(312) 00:10:08.916 fused_ordering(313) 00:10:08.916 fused_ordering(314) 00:10:08.916 fused_ordering(315) 00:10:08.916 fused_ordering(316) 00:10:08.916 fused_ordering(317) 00:10:08.916 fused_ordering(318) 00:10:08.916 fused_ordering(319) 00:10:08.916 fused_ordering(320) 00:10:08.916 fused_ordering(321) 00:10:08.916 fused_ordering(322) 00:10:08.916 fused_ordering(323) 00:10:08.916 fused_ordering(324) 00:10:08.916 fused_ordering(325) 00:10:08.916 fused_ordering(326) 00:10:08.916 fused_ordering(327) 00:10:08.916 fused_ordering(328) 00:10:08.916 fused_ordering(329) 00:10:08.916 fused_ordering(330) 00:10:08.916 fused_ordering(331) 00:10:08.916 fused_ordering(332) 00:10:08.916 fused_ordering(333) 00:10:08.916 fused_ordering(334) 00:10:08.916 fused_ordering(335) 00:10:08.916 fused_ordering(336) 00:10:08.916 fused_ordering(337) 00:10:08.916 fused_ordering(338) 00:10:08.916 fused_ordering(339) 00:10:08.916 fused_ordering(340) 00:10:08.916 fused_ordering(341) 00:10:08.916 fused_ordering(342) 00:10:08.916 fused_ordering(343) 00:10:08.916 fused_ordering(344) 00:10:08.916 fused_ordering(345) 00:10:08.916 fused_ordering(346) 00:10:08.916 fused_ordering(347) 00:10:08.916 fused_ordering(348) 00:10:08.916 fused_ordering(349) 00:10:08.916 fused_ordering(350) 00:10:08.916 fused_ordering(351) 00:10:08.916 fused_ordering(352) 00:10:08.916 fused_ordering(353) 00:10:08.916 fused_ordering(354) 00:10:08.916 fused_ordering(355) 00:10:08.916 fused_ordering(356) 00:10:08.916 fused_ordering(357) 00:10:08.916 fused_ordering(358) 00:10:08.916 fused_ordering(359) 00:10:08.916 fused_ordering(360) 00:10:08.916 fused_ordering(361) 00:10:08.916 fused_ordering(362) 00:10:08.916 fused_ordering(363) 00:10:08.916 fused_ordering(364) 00:10:08.916 fused_ordering(365) 00:10:08.916 fused_ordering(366) 00:10:08.916 fused_ordering(367) 00:10:08.916 fused_ordering(368) 00:10:08.916 fused_ordering(369) 00:10:08.916 fused_ordering(370) 00:10:08.916 fused_ordering(371) 00:10:08.916 fused_ordering(372) 00:10:08.916 fused_ordering(373) 00:10:08.916 fused_ordering(374) 00:10:08.916 fused_ordering(375) 00:10:08.916 fused_ordering(376) 00:10:08.916 fused_ordering(377) 00:10:08.916 fused_ordering(378) 00:10:08.916 fused_ordering(379) 00:10:08.916 fused_ordering(380) 00:10:08.916 fused_ordering(381) 00:10:08.916 fused_ordering(382) 00:10:08.916 fused_ordering(383) 00:10:08.916 fused_ordering(384) 00:10:08.916 fused_ordering(385) 00:10:08.916 fused_ordering(386) 00:10:08.916 fused_ordering(387) 00:10:08.916 fused_ordering(388) 00:10:08.916 fused_ordering(389) 00:10:08.916 fused_ordering(390) 00:10:08.916 fused_ordering(391) 00:10:08.916 fused_ordering(392) 00:10:08.916 fused_ordering(393) 00:10:08.916 fused_ordering(394) 00:10:08.916 fused_ordering(395) 00:10:08.916 fused_ordering(396) 00:10:08.916 fused_ordering(397) 00:10:08.916 fused_ordering(398) 00:10:08.916 fused_ordering(399) 00:10:08.916 fused_ordering(400) 00:10:08.916 fused_ordering(401) 00:10:08.916 fused_ordering(402) 00:10:08.916 fused_ordering(403) 00:10:08.916 fused_ordering(404) 00:10:08.916 fused_ordering(405) 00:10:08.916 fused_ordering(406) 00:10:08.916 fused_ordering(407) 00:10:08.916 fused_ordering(408) 00:10:08.916 fused_ordering(409) 00:10:08.916 fused_ordering(410) 00:10:09.852 fused_ordering(411) 00:10:09.852 fused_ordering(412) 00:10:09.852 fused_ordering(413) 00:10:09.852 fused_ordering(414) 00:10:09.852 fused_ordering(415) 00:10:09.852 fused_ordering(416) 00:10:09.852 fused_ordering(417) 00:10:09.852 fused_ordering(418) 00:10:09.852 fused_ordering(419) 
00:10:09.852 fused_ordering(420) ... 00:10:10.788 ... 00:10:11.354 ... fused_ordering(956) [several hundred consecutive fused_ordering(N) counter lines, N = 420 through 956, condensed; the sequence runs without gaps and continues through fused_ordering(1023) below]
fused_ordering(957) 00:10:11.354 fused_ordering(958) 00:10:11.354 fused_ordering(959) 00:10:11.354 fused_ordering(960) 00:10:11.354 fused_ordering(961) 00:10:11.354 fused_ordering(962) 00:10:11.354 fused_ordering(963) 00:10:11.354 fused_ordering(964) 00:10:11.354 fused_ordering(965) 00:10:11.354 fused_ordering(966) 00:10:11.354 fused_ordering(967) 00:10:11.354 fused_ordering(968) 00:10:11.354 fused_ordering(969) 00:10:11.354 fused_ordering(970) 00:10:11.354 fused_ordering(971) 00:10:11.354 fused_ordering(972) 00:10:11.354 fused_ordering(973) 00:10:11.354 fused_ordering(974) 00:10:11.354 fused_ordering(975) 00:10:11.354 fused_ordering(976) 00:10:11.354 fused_ordering(977) 00:10:11.354 fused_ordering(978) 00:10:11.354 fused_ordering(979) 00:10:11.354 fused_ordering(980) 00:10:11.354 fused_ordering(981) 00:10:11.354 fused_ordering(982) 00:10:11.354 fused_ordering(983) 00:10:11.354 fused_ordering(984) 00:10:11.354 fused_ordering(985) 00:10:11.354 fused_ordering(986) 00:10:11.354 fused_ordering(987) 00:10:11.354 fused_ordering(988) 00:10:11.354 fused_ordering(989) 00:10:11.354 fused_ordering(990) 00:10:11.354 fused_ordering(991) 00:10:11.354 fused_ordering(992) 00:10:11.354 fused_ordering(993) 00:10:11.354 fused_ordering(994) 00:10:11.354 fused_ordering(995) 00:10:11.354 fused_ordering(996) 00:10:11.354 fused_ordering(997) 00:10:11.354 fused_ordering(998) 00:10:11.354 fused_ordering(999) 00:10:11.354 fused_ordering(1000) 00:10:11.354 fused_ordering(1001) 00:10:11.354 fused_ordering(1002) 00:10:11.354 fused_ordering(1003) 00:10:11.355 fused_ordering(1004) 00:10:11.355 fused_ordering(1005) 00:10:11.355 fused_ordering(1006) 00:10:11.355 fused_ordering(1007) 00:10:11.355 fused_ordering(1008) 00:10:11.355 fused_ordering(1009) 00:10:11.355 fused_ordering(1010) 00:10:11.355 fused_ordering(1011) 00:10:11.355 fused_ordering(1012) 00:10:11.355 fused_ordering(1013) 00:10:11.355 fused_ordering(1014) 00:10:11.355 fused_ordering(1015) 00:10:11.355 fused_ordering(1016) 00:10:11.355 fused_ordering(1017) 00:10:11.355 fused_ordering(1018) 00:10:11.355 fused_ordering(1019) 00:10:11.355 fused_ordering(1020) 00:10:11.355 fused_ordering(1021) 00:10:11.355 fused_ordering(1022) 00:10:11.355 fused_ordering(1023) 00:10:11.355 00:44:04 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:11.355 00:44:04 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:11.355 00:44:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:11.355 00:44:04 -- nvmf/common.sh@117 -- # sync 00:10:11.355 00:44:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.355 00:44:04 -- nvmf/common.sh@120 -- # set +e 00:10:11.355 00:44:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.355 00:44:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.355 rmmod nvme_tcp 00:10:11.613 rmmod nvme_fabrics 00:10:11.613 rmmod nvme_keyring 00:10:11.613 00:44:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.613 00:44:04 -- nvmf/common.sh@124 -- # set -e 00:10:11.613 00:44:04 -- nvmf/common.sh@125 -- # return 0 00:10:11.613 00:44:04 -- nvmf/common.sh@478 -- # '[' -n 1595439 ']' 00:10:11.613 00:44:04 -- nvmf/common.sh@479 -- # killprocess 1595439 00:10:11.613 00:44:04 -- common/autotest_common.sh@936 -- # '[' -z 1595439 ']' 00:10:11.613 00:44:04 -- common/autotest_common.sh@940 -- # kill -0 1595439 00:10:11.613 00:44:04 -- common/autotest_common.sh@941 -- # uname 00:10:11.613 00:44:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:11.613 00:44:04 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 1595439 00:10:11.613 00:44:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:11.613 00:44:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:11.613 00:44:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1595439' 00:10:11.613 killing process with pid 1595439 00:10:11.613 00:44:04 -- common/autotest_common.sh@955 -- # kill 1595439 00:10:11.613 00:44:04 -- common/autotest_common.sh@960 -- # wait 1595439 00:10:11.872 00:44:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:11.872 00:44:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:11.872 00:44:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:11.872 00:44:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.872 00:44:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:11.872 00:44:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.872 00:44:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.872 00:44:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.775 00:44:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:13.776 00:10:13.776 real 0m12.903s 00:10:13.776 user 0m8.408s 00:10:13.776 sys 0m7.167s 00:10:13.776 00:44:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:13.776 00:44:06 -- common/autotest_common.sh@10 -- # set +x 00:10:13.776 ************************************ 00:10:13.776 END TEST nvmf_fused_ordering 00:10:13.776 ************************************ 00:10:13.776 00:44:06 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:13.776 00:44:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:13.776 00:44:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:13.776 00:44:06 -- common/autotest_common.sh@10 -- # set +x 00:10:14.033 ************************************ 00:10:14.033 START TEST nvmf_delete_subsystem 00:10:14.033 ************************************ 00:10:14.033 00:44:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:14.033 * Looking for test storage... 
00:10:14.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.033 00:44:06 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.033 00:44:06 -- nvmf/common.sh@7 -- # uname -s 00:10:14.033 00:44:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.033 00:44:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.033 00:44:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.033 00:44:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.033 00:44:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.033 00:44:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.033 00:44:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.034 00:44:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.034 00:44:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.034 00:44:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.034 00:44:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:14.034 00:44:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:14.034 00:44:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.034 00:44:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.034 00:44:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.034 00:44:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.034 00:44:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.034 00:44:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.034 00:44:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.034 00:44:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.034 00:44:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.034 00:44:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.034 00:44:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.034 00:44:06 -- paths/export.sh@5 -- # export PATH 00:10:14.034 00:44:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.034 00:44:06 -- nvmf/common.sh@47 -- # : 0 00:10:14.034 00:44:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:14.034 00:44:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.034 00:44:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.034 00:44:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.034 00:44:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.034 00:44:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:14.034 00:44:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.034 00:44:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:14.034 00:44:06 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:14.034 00:44:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:14.034 00:44:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.034 00:44:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:14.034 00:44:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:14.034 00:44:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:14.034 00:44:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.034 00:44:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.034 00:44:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.034 00:44:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:14.034 00:44:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:14.034 00:44:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:14.034 00:44:06 -- common/autotest_common.sh@10 -- # set +x 00:10:19.300 00:44:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:19.300 00:44:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:19.300 00:44:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:19.300 00:44:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:19.300 00:44:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:19.300 00:44:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:19.300 00:44:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:19.300 00:44:11 -- nvmf/common.sh@295 -- # net_devs=() 00:10:19.300 00:44:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:19.300 00:44:11 -- nvmf/common.sh@296 -- # e810=() 00:10:19.300 00:44:11 -- nvmf/common.sh@296 -- # local -ga e810 00:10:19.300 00:44:11 -- nvmf/common.sh@297 -- # x722=() 
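For readability, the environment that nvmf/common.sh builds up in the trace above can be restated as a short script. This is a sketch assembled from the logged values, not the authoritative common.sh: the variable names and literals are the ones visible in the trace, and deriving NVME_HOSTID from the generated host NQN is an assumption based on the matching UUID in the log.

  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_IP_PREFIX=192.168.100
  NVMF_TCP_IP_ADDRESS=127.0.0.1
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: the host ID is the UUID portion of that NQN
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT='nvme connect'
  NET_TYPE=phy                            # physical e810 ports, not veth pairs
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn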
00:10:19.300 00:44:11 -- nvmf/common.sh@297 -- # local -ga x722 00:10:19.300 00:44:11 -- nvmf/common.sh@298 -- # mlx=() 00:10:19.300 00:44:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:19.300 00:44:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.300 00:44:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.300 00:44:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.300 00:44:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.300 00:44:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.300 00:44:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.300 00:44:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.300 00:44:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.300 00:44:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.300 00:44:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.300 00:44:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.300 00:44:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:19.300 00:44:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:19.300 00:44:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:19.300 00:44:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:19.300 00:44:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:19.301 00:44:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:19.301 00:44:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.301 00:44:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:19.301 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:19.301 00:44:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.301 00:44:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:19.301 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:19.301 00:44:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:19.301 00:44:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.301 00:44:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.301 00:44:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:19.301 00:44:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.301 00:44:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:19.301 Found net devices under 0000:86:00.0: cvl_0_0 00:10:19.301 00:44:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:10:19.301 00:44:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.301 00:44:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.301 00:44:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:19.301 00:44:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.301 00:44:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:19.301 Found net devices under 0000:86:00.1: cvl_0_1 00:10:19.301 00:44:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.301 00:44:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:19.301 00:44:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:19.301 00:44:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:19.301 00:44:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.301 00:44:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.301 00:44:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:19.301 00:44:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:19.301 00:44:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:19.301 00:44:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:19.301 00:44:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:19.301 00:44:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:19.301 00:44:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.301 00:44:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:19.301 00:44:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:19.301 00:44:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:19.301 00:44:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:19.301 00:44:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:19.301 00:44:11 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:19.301 00:44:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:19.301 00:44:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.301 00:44:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.301 00:44:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.301 00:44:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:19.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:19.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:10:19.301 00:10:19.301 --- 10.0.0.2 ping statistics --- 00:10:19.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.301 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:19.301 00:44:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:19.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:19.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:10:19.301 00:10:19.301 --- 10.0.0.1 ping statistics --- 00:10:19.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.301 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:10:19.301 00:44:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.301 00:44:11 -- nvmf/common.sh@411 -- # return 0 00:10:19.301 00:44:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:19.301 00:44:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.301 00:44:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:19.301 00:44:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.301 00:44:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:19.301 00:44:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:19.301 00:44:11 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:19.301 00:44:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:19.301 00:44:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:19.301 00:44:11 -- common/autotest_common.sh@10 -- # set +x 00:10:19.301 00:44:11 -- nvmf/common.sh@470 -- # nvmfpid=1599794 00:10:19.301 00:44:11 -- nvmf/common.sh@471 -- # waitforlisten 1599794 00:10:19.301 00:44:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:19.301 00:44:11 -- common/autotest_common.sh@817 -- # '[' -z 1599794 ']' 00:10:19.301 00:44:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.301 00:44:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:19.301 00:44:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.301 00:44:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:19.301 00:44:11 -- common/autotest_common.sh@10 -- # set +x 00:10:19.301 [2024-04-27 00:44:11.832970] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:10:19.301 [2024-04-27 00:44:11.833011] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.301 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.301 [2024-04-27 00:44:11.891693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:19.301 [2024-04-27 00:44:11.963277] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.301 [2024-04-27 00:44:11.963322] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.301 [2024-04-27 00:44:11.963329] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.301 [2024-04-27 00:44:11.963335] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.301 [2024-04-27 00:44:11.963339] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
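The nvmf_tcp_init sequence traced above is the part worth reading closely: the two e810 ports (cvl_0_0 and cvl_0_1, presumably looped back to back since NET_TYPE=phy) are split so that cvl_0_0 lives in a private network namespace as the target side while cvl_0_1 stays in the root namespace as the initiator side, and a ping in each direction confirms the path before the target is started. A minimal sketch using only the commands and names that appear in the trace:

  NVMF_INITIATOR_IP=10.0.0.1
  NVMF_FIRST_TARGET_IP=10.0.0.2
  NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NVMF_TARGET_NAMESPACE"
  ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"              # target port moves into the namespace
  ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
  ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # accept NVMe/TCP on the initiator port
  ping -c 1 "$NVMF_FIRST_TARGET_IP"                               # root namespace -> target namespace
  ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"   # and the reverse direction

The target application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x3), which is what the surrounding nvmfappstart lines show.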
00:10:19.301 [2024-04-27 00:44:11.963387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.301 [2024-04-27 00:44:11.963389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.232 00:44:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:20.232 00:44:12 -- common/autotest_common.sh@850 -- # return 0 00:10:20.232 00:44:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:20.232 00:44:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:20.232 00:44:12 -- common/autotest_common.sh@10 -- # set +x 00:10:20.232 00:44:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.232 00:44:12 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.232 00:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.232 00:44:12 -- common/autotest_common.sh@10 -- # set +x 00:10:20.232 [2024-04-27 00:44:12.664674] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.232 00:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.232 00:44:12 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:20.232 00:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.232 00:44:12 -- common/autotest_common.sh@10 -- # set +x 00:10:20.232 00:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.232 00:44:12 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.232 00:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.232 00:44:12 -- common/autotest_common.sh@10 -- # set +x 00:10:20.232 [2024-04-27 00:44:12.680837] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.232 00:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.232 00:44:12 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:20.232 00:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.232 00:44:12 -- common/autotest_common.sh@10 -- # set +x 00:10:20.232 NULL1 00:10:20.232 00:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.232 00:44:12 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:20.232 00:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.232 00:44:12 -- common/autotest_common.sh@10 -- # set +x 00:10:20.232 Delay0 00:10:20.232 00:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.232 00:44:12 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.232 00:44:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:20.232 00:44:12 -- common/autotest_common.sh@10 -- # set +x 00:10:20.232 00:44:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:20.232 00:44:12 -- target/delete_subsystem.sh@28 -- # perf_pid=1599921 00:10:20.232 00:44:12 -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:20.232 00:44:12 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:20.232 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.232 [2024-04-27 00:44:12.755456] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:22.130 00:44:14 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:22.130 00:44:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:22.130 00:44:14 -- common/autotest_common.sh@10 -- # set +x 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 starting I/O failed: -6 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 starting I/O failed: -6 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 starting I/O failed: -6 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 starting I/O failed: -6 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 starting I/O failed: -6 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 starting I/O failed: -6 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 starting I/O failed: -6 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 starting I/O failed: -6 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 starting I/O failed: -6 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 starting I/O failed: -6 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 starting I/O failed: -6 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 starting I/O failed: -6 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Write completed with error (sct=0, sc=8) 00:10:22.130 Write 
completed with error (sct=0, sc=8) 00:10:22.130 starting I/O failed: -6 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 [2024-04-27 00:44:14.805202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc97590 is same with the state(5) to be set 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.130 Read completed with error (sct=0, sc=8) 00:10:22.131 starting I/O failed: -6 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 starting I/O failed: -6 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 starting I/O failed: -6 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 starting I/O failed: -6 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 starting I/O failed: -6 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 starting I/O failed: -6 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 starting I/O failed: -6 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 starting I/O failed: -6 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 starting I/O failed: -6 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 [2024-04-27 00:44:14.806684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1dac00c510 is same with the state(5) to be set 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error 
(sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read 
completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Read completed with error (sct=0, sc=8) 00:10:22.131 Write completed with error (sct=0, sc=8) 00:10:23.083 [2024-04-27 00:44:15.772570] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc96140 is same with the state(5) to be set 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 [2024-04-27 00:44:15.807574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1dac00c250 is same with the state(5) to be set 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with 
error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 [2024-04-27 00:44:15.809722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc97110 is same with the state(5) to be set 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 [2024-04-27 00:44:15.809909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc97400 is same with the state(5) to be set 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 
00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Write completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 Read completed with error (sct=0, sc=8) 00:10:23.353 [2024-04-27 00:44:15.810043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc97850 is same with the state(5) to be set 00:10:23.353 [2024-04-27 00:44:15.810606] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc96140 (9): Bad file descriptor 00:10:23.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:23.353 00:44:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.354 00:44:15 -- target/delete_subsystem.sh@34 -- # delay=0 00:10:23.354 00:44:15 -- target/delete_subsystem.sh@35 -- # kill -0 1599921 00:10:23.354 Initializing NVMe Controllers 00:10:23.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:23.354 Controller IO queue size 128, less than required. 00:10:23.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:23.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:23.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:23.354 Initialization complete. Launching workers. 
00:10:23.354 ======================================================== 00:10:23.354 Latency(us) 00:10:23.354 Device Information : IOPS MiB/s Average min max 00:10:23.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.22 0.09 952097.89 586.11 1012425.14 00:10:23.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 153.45 0.07 883112.23 215.46 1013651.98 00:10:23.354 ======================================================== 00:10:23.354 Total : 340.68 0.17 921024.17 215.46 1013651.98 00:10:23.354 00:10:23.354 00:44:15 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:23.929 00:44:16 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:23.929 00:44:16 -- target/delete_subsystem.sh@35 -- # kill -0 1599921 00:10:23.929 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1599921) - No such process 00:10:23.929 00:44:16 -- target/delete_subsystem.sh@45 -- # NOT wait 1599921 00:10:23.929 00:44:16 -- common/autotest_common.sh@638 -- # local es=0 00:10:23.929 00:44:16 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 1599921 00:10:23.929 00:44:16 -- common/autotest_common.sh@626 -- # local arg=wait 00:10:23.929 00:44:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:23.929 00:44:16 -- common/autotest_common.sh@630 -- # type -t wait 00:10:23.929 00:44:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:23.929 00:44:16 -- common/autotest_common.sh@641 -- # wait 1599921 00:10:23.929 00:44:16 -- common/autotest_common.sh@641 -- # es=1 00:10:23.929 00:44:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:23.929 00:44:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:23.929 00:44:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:23.929 00:44:16 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:23.929 00:44:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.929 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:10:23.929 00:44:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.929 00:44:16 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.929 00:44:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.929 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:10:23.929 [2024-04-27 00:44:16.339266] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.929 00:44:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.929 00:44:16 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.929 00:44:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:23.929 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:10:23.929 00:44:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:23.929 00:44:16 -- target/delete_subsystem.sh@54 -- # perf_pid=1600612 00:10:23.929 00:44:16 -- target/delete_subsystem.sh@56 -- # delay=0 00:10:23.929 00:44:16 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:23.929 00:44:16 -- target/delete_subsystem.sh@57 -- # kill -0 1600612 00:10:23.929 00:44:16 -- 
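The pass that just completed is the core of the test: a subsystem backed by a deliberately slow bdev is deleted while spdk_nvme_perf still has a full queue outstanding, the queued commands complete with sct=0, sc=8 (the generic NVMe status "Command Aborted due to SQ Deletion"), and perf exits reporting errors, which is what the latency table above summarizes. Condensed into the command sequence actually traced (rpc_cmd is the RPC helper used throughout the trace; the literal values are the ones from the log):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512                   # null bdev, 512-byte blocks
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
                                                            # ~1 s of added latency keeps I/O in flight
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!                                               # 1599921 in this run
  sleep 2                                                   # let perf connect and fill its queues
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # delete with I/O still outstanding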
target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:23.929 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.929 [2024-04-27 00:44:16.396236] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:24.188 00:44:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:24.188 00:44:16 -- target/delete_subsystem.sh@57 -- # kill -0 1600612 00:10:24.188 00:44:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:24.754 00:44:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:24.754 00:44:17 -- target/delete_subsystem.sh@57 -- # kill -0 1600612 00:10:24.754 00:44:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:25.321 00:44:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:25.321 00:44:17 -- target/delete_subsystem.sh@57 -- # kill -0 1600612 00:10:25.321 00:44:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:25.886 00:44:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:25.886 00:44:18 -- target/delete_subsystem.sh@57 -- # kill -0 1600612 00:10:25.886 00:44:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:26.453 00:44:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:26.453 00:44:18 -- target/delete_subsystem.sh@57 -- # kill -0 1600612 00:10:26.453 00:44:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:26.710 00:44:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:26.710 00:44:19 -- target/delete_subsystem.sh@57 -- # kill -0 1600612 00:10:26.710 00:44:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:27.276 Initializing NVMe Controllers 00:10:27.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:27.276 Controller IO queue size 128, less than required. 00:10:27.276 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:27.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:27.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:27.276 Initialization complete. Launching workers. 
00:10:27.276 ======================================================== 00:10:27.276 Latency(us) 00:10:27.276 Device Information : IOPS MiB/s Average min max 00:10:27.276 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003599.27 1000186.02 1042123.81 00:10:27.276 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005689.44 1000509.20 1042822.21 00:10:27.276 ======================================================== 00:10:27.276 Total : 256.00 0.12 1004644.36 1000186.02 1042822.21 00:10:27.276 00:10:27.276 00:44:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:27.276 00:44:19 -- target/delete_subsystem.sh@57 -- # kill -0 1600612 00:10:27.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1600612) - No such process 00:10:27.276 00:44:19 -- target/delete_subsystem.sh@67 -- # wait 1600612 00:10:27.276 00:44:19 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:27.276 00:44:19 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:27.276 00:44:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:27.276 00:44:19 -- nvmf/common.sh@117 -- # sync 00:10:27.276 00:44:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.276 00:44:19 -- nvmf/common.sh@120 -- # set +e 00:10:27.276 00:44:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.276 00:44:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.277 rmmod nvme_tcp 00:10:27.277 rmmod nvme_fabrics 00:10:27.277 rmmod nvme_keyring 00:10:27.277 00:44:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.277 00:44:19 -- nvmf/common.sh@124 -- # set -e 00:10:27.277 00:44:19 -- nvmf/common.sh@125 -- # return 0 00:10:27.277 00:44:19 -- nvmf/common.sh@478 -- # '[' -n 1599794 ']' 00:10:27.277 00:44:19 -- nvmf/common.sh@479 -- # killprocess 1599794 00:10:27.277 00:44:19 -- common/autotest_common.sh@936 -- # '[' -z 1599794 ']' 00:10:27.277 00:44:19 -- common/autotest_common.sh@940 -- # kill -0 1599794 00:10:27.277 00:44:19 -- common/autotest_common.sh@941 -- # uname 00:10:27.277 00:44:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:27.277 00:44:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1599794 00:10:27.537 00:44:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:27.537 00:44:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:27.537 00:44:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1599794' 00:10:27.537 killing process with pid 1599794 00:10:27.537 00:44:19 -- common/autotest_common.sh@955 -- # kill 1599794 00:10:27.537 00:44:19 -- common/autotest_common.sh@960 -- # wait 1599794 00:10:27.537 00:44:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:27.537 00:44:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:27.537 00:44:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:27.537 00:44:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.537 00:44:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:27.537 00:44:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.537 00:44:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.537 00:44:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.074 00:44:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:30.074 00:10:30.074 real 0m15.720s 00:10:30.074 user 0m30.079s 00:10:30.074 sys 0m4.739s 
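For reference, the delete_subsystem sequence that produced the two result tables and the runs of "completed with error (sct=0, sc=8)" above reduces to the sketch below. The commands and option values are the ones visible in the trace; the shape of the poll loop and the point at which the subsystem is deleted are paraphrased from the script rather than copied verbatim.

  # target side: subsystem capped at 10 namespaces, TCP listener, Delay0 namespace
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # host side: 3 s of 70/30 random read/write at queue depth 128, 512 B I/O
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # the subsystem is torn down while perf still has I/O outstanding (the delete
  # call itself falls outside this excerpt), then the perf PID is polled until it exits
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do
      (( delay++ > 20 )) && break
      sleep 0.5
  done

The outstanding I/Os complete with error status, spdk_nvme_perf reports "errors occurred", and the harness treats that as the expected result before tearing the target down and moving on to the next test.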
00:10:30.074 00:44:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:30.074 00:44:22 -- common/autotest_common.sh@10 -- # set +x 00:10:30.074 ************************************ 00:10:30.074 END TEST nvmf_delete_subsystem 00:10:30.074 ************************************ 00:10:30.074 00:44:22 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:30.074 00:44:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:30.074 00:44:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:30.074 00:44:22 -- common/autotest_common.sh@10 -- # set +x 00:10:30.074 ************************************ 00:10:30.074 START TEST nvmf_ns_masking 00:10:30.074 ************************************ 00:10:30.074 00:44:22 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:30.074 * Looking for test storage... 00:10:30.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.074 00:44:22 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.074 00:44:22 -- nvmf/common.sh@7 -- # uname -s 00:10:30.074 00:44:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.074 00:44:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.074 00:44:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.074 00:44:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.074 00:44:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.074 00:44:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.074 00:44:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.074 00:44:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.074 00:44:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.074 00:44:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.074 00:44:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:30.074 00:44:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:30.074 00:44:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.074 00:44:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.074 00:44:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.074 00:44:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.074 00:44:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.074 00:44:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.074 00:44:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.074 00:44:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.074 00:44:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.074 00:44:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.074 00:44:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.074 00:44:22 -- paths/export.sh@5 -- # export PATH 00:10:30.074 00:44:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.074 00:44:22 -- nvmf/common.sh@47 -- # : 0 00:10:30.074 00:44:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.074 00:44:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.074 00:44:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.074 00:44:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.074 00:44:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.074 00:44:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.074 00:44:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.074 00:44:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.074 00:44:22 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:30.074 00:44:22 -- target/ns_masking.sh@11 -- # loops=5 00:10:30.074 00:44:22 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:30.074 00:44:22 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:10:30.074 00:44:22 -- target/ns_masking.sh@15 -- # uuidgen 00:10:30.074 00:44:22 -- target/ns_masking.sh@15 -- # HOSTID=3d001d22-fc51-4103-bdd5-b0a63b5e250f 00:10:30.074 00:44:22 -- target/ns_masking.sh@44 -- # nvmftestinit 00:10:30.074 00:44:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:30.074 00:44:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.074 00:44:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:30.074 00:44:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:30.074 00:44:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:30.074 00:44:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.074 00:44:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.074 00:44:22 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:10:30.074 00:44:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:30.074 00:44:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:30.074 00:44:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:30.074 00:44:22 -- common/autotest_common.sh@10 -- # set +x 00:10:35.350 00:44:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:35.350 00:44:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:35.350 00:44:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:35.350 00:44:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:35.350 00:44:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:35.350 00:44:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:35.350 00:44:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:35.350 00:44:27 -- nvmf/common.sh@295 -- # net_devs=() 00:10:35.350 00:44:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:35.350 00:44:27 -- nvmf/common.sh@296 -- # e810=() 00:10:35.350 00:44:27 -- nvmf/common.sh@296 -- # local -ga e810 00:10:35.350 00:44:27 -- nvmf/common.sh@297 -- # x722=() 00:10:35.350 00:44:27 -- nvmf/common.sh@297 -- # local -ga x722 00:10:35.350 00:44:27 -- nvmf/common.sh@298 -- # mlx=() 00:10:35.350 00:44:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:35.350 00:44:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.350 00:44:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.350 00:44:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.350 00:44:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.350 00:44:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.350 00:44:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.350 00:44:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.350 00:44:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.350 00:44:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.350 00:44:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.350 00:44:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.350 00:44:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:35.350 00:44:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:35.350 00:44:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:35.350 00:44:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:35.350 00:44:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:35.350 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:35.350 00:44:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:35.350 00:44:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:35.350 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:35.350 00:44:27 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:35.350 00:44:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:35.350 00:44:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.350 00:44:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:35.350 00:44:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.350 00:44:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:35.350 Found net devices under 0000:86:00.0: cvl_0_0 00:10:35.350 00:44:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.350 00:44:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:35.350 00:44:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.350 00:44:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:35.350 00:44:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.350 00:44:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:35.350 Found net devices under 0000:86:00.1: cvl_0_1 00:10:35.350 00:44:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.350 00:44:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:35.350 00:44:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:35.350 00:44:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:35.350 00:44:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:35.350 00:44:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.350 00:44:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.350 00:44:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:35.350 00:44:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:35.350 00:44:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:35.350 00:44:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:35.350 00:44:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:35.350 00:44:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:35.350 00:44:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.350 00:44:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:35.350 00:44:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:35.350 00:44:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:35.350 00:44:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:35.350 00:44:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:35.350 00:44:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:35.350 00:44:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:35.350 00:44:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:35.350 00:44:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:35.351 00:44:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:35.351 00:44:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:35.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:10:35.351 00:10:35.351 --- 10.0.0.2 ping statistics --- 00:10:35.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.351 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:10:35.351 00:44:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:35.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:35.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:10:35.351 00:10:35.351 --- 10.0.0.1 ping statistics --- 00:10:35.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.351 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:10:35.351 00:44:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.351 00:44:27 -- nvmf/common.sh@411 -- # return 0 00:10:35.351 00:44:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:35.351 00:44:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.351 00:44:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:35.351 00:44:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:35.351 00:44:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.351 00:44:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:35.351 00:44:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:35.351 00:44:27 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:10:35.351 00:44:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:35.351 00:44:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:35.351 00:44:27 -- common/autotest_common.sh@10 -- # set +x 00:10:35.351 00:44:27 -- nvmf/common.sh@470 -- # nvmfpid=1604607 00:10:35.351 00:44:27 -- nvmf/common.sh@471 -- # waitforlisten 1604607 00:10:35.351 00:44:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.351 00:44:27 -- common/autotest_common.sh@817 -- # '[' -z 1604607 ']' 00:10:35.351 00:44:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.351 00:44:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:35.351 00:44:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.351 00:44:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:35.351 00:44:27 -- common/autotest_common.sh@10 -- # set +x 00:10:35.351 [2024-04-27 00:44:27.972288] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:10:35.351 [2024-04-27 00:44:27.972330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.351 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.351 [2024-04-27 00:44:28.028091] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.609 [2024-04-27 00:44:28.108918] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
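The ping exchange above closes out the physical-NIC bring-up done by nvmftestinit: the first E810 port (cvl_0_0) is moved into a private network namespace and plays the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands traced above (interface and namespace names are the ones this rig uses):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The nvmf_tgt application the test talks to is then launched inside that same namespace, as traced above: ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF.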
00:10:35.610 [2024-04-27 00:44:28.108954] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.610 [2024-04-27 00:44:28.108961] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.610 [2024-04-27 00:44:28.108967] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.610 [2024-04-27 00:44:28.108972] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.610 [2024-04-27 00:44:28.109013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.610 [2024-04-27 00:44:28.109114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.610 [2024-04-27 00:44:28.109138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.610 [2024-04-27 00:44:28.109139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.177 00:44:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:36.177 00:44:28 -- common/autotest_common.sh@850 -- # return 0 00:10:36.177 00:44:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:36.177 00:44:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:36.177 00:44:28 -- common/autotest_common.sh@10 -- # set +x 00:10:36.177 00:44:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.177 00:44:28 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:36.436 [2024-04-27 00:44:28.966561] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:36.436 00:44:28 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:10:36.436 00:44:28 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:10:36.436 00:44:28 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:36.695 Malloc1 00:10:36.695 00:44:29 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:36.695 Malloc2 00:10:36.695 00:44:29 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:36.953 00:44:29 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:37.212 00:44:29 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.212 [2024-04-27 00:44:29.874485] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.212 00:44:29 -- target/ns_masking.sh@61 -- # connect 00:10:37.212 00:44:29 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3d001d22-fc51-4103-bdd5-b0a63b5e250f -a 10.0.0.2 -s 4420 -i 4 00:10:37.471 00:44:30 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:10:37.471 00:44:30 -- common/autotest_common.sh@1184 -- # local i=0 00:10:37.471 00:44:30 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:37.471 00:44:30 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
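At this point the masking test has a complete target: a TCP transport, two 64 MB malloc bdevs, a subsystem with Malloc1 attached as namespace 1, and a listener on 10.0.0.2:4420. The host then connects with an explicit host NQN and host identifier so that per-host namespace visibility can be exercised later. Condensed from the trace (the UUID is the HOSTID generated for this run):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I 3d001d22-fc51-4103-bdd5-b0a63b5e250f -a 10.0.0.2 -s 4420 -i 4

The waitforserial loop that follows simply polls lsblk for a block device whose serial matches SPDKISFASTANDAWESOME before the visibility checks start.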
00:10:37.471 00:44:30 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:40.006 00:44:32 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:40.006 00:44:32 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:40.006 00:44:32 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.006 00:44:32 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:40.006 00:44:32 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.006 00:44:32 -- common/autotest_common.sh@1194 -- # return 0 00:10:40.006 00:44:32 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:40.006 00:44:32 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:40.006 00:44:32 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:40.006 00:44:32 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:40.006 00:44:32 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:10:40.006 00:44:32 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:40.006 00:44:32 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:40.006 [ 0]:0x1 00:10:40.006 00:44:32 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:40.006 00:44:32 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:40.006 00:44:32 -- target/ns_masking.sh@40 -- # nguid=56ae85e530d2465286baffe1cb771d27 00:10:40.006 00:44:32 -- target/ns_masking.sh@41 -- # [[ 56ae85e530d2465286baffe1cb771d27 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:40.006 00:44:32 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:10:40.006 00:44:32 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:10:40.006 00:44:32 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:40.006 00:44:32 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:40.006 [ 0]:0x1 00:10:40.006 00:44:32 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:40.006 00:44:32 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:40.006 00:44:32 -- target/ns_masking.sh@40 -- # nguid=56ae85e530d2465286baffe1cb771d27 00:10:40.006 00:44:32 -- target/ns_masking.sh@41 -- # [[ 56ae85e530d2465286baffe1cb771d27 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:40.006 00:44:32 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:10:40.006 00:44:32 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:40.006 00:44:32 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:40.006 [ 1]:0x2 00:10:40.006 00:44:32 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:40.006 00:44:32 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:40.006 00:44:32 -- target/ns_masking.sh@40 -- # nguid=79de5608a18449b0918772b5a0f6e006 00:10:40.006 00:44:32 -- target/ns_masking.sh@41 -- # [[ 79de5608a18449b0918772b5a0f6e006 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:40.006 00:44:32 -- target/ns_masking.sh@69 -- # disconnect 00:10:40.006 00:44:32 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.006 00:44:32 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.265 00:44:32 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:40.522 00:44:33 -- target/ns_masking.sh@77 -- # connect 1 00:10:40.522 00:44:33 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3d001d22-fc51-4103-bdd5-b0a63b5e250f -a 10.0.0.2 -s 4420 -i 4 00:10:40.522 00:44:33 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:40.522 00:44:33 -- common/autotest_common.sh@1184 -- # local i=0 00:10:40.522 00:44:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.522 00:44:33 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:10:40.522 00:44:33 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:10:40.522 00:44:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:43.053 00:44:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:43.053 00:44:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:43.053 00:44:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.053 00:44:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:43.053 00:44:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.053 00:44:35 -- common/autotest_common.sh@1194 -- # return 0 00:10:43.053 00:44:35 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:43.053 00:44:35 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:43.053 00:44:35 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:43.053 00:44:35 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:43.053 00:44:35 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:10:43.053 00:44:35 -- common/autotest_common.sh@638 -- # local es=0 00:10:43.053 00:44:35 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:43.053 00:44:35 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:43.053 00:44:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:43.053 00:44:35 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:43.053 00:44:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:43.053 00:44:35 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:43.053 00:44:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:43.053 00:44:35 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:43.053 00:44:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:43.053 00:44:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:43.053 00:44:35 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:43.053 00:44:35 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:43.053 00:44:35 -- common/autotest_common.sh@641 -- # es=1 00:10:43.053 00:44:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:43.053 00:44:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:43.053 00:44:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:43.053 00:44:35 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:10:43.053 00:44:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:43.053 00:44:35 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:43.053 [ 0]:0x2 00:10:43.053 00:44:35 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:10:43.053 00:44:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:43.053 00:44:35 -- target/ns_masking.sh@40 -- # nguid=79de5608a18449b0918772b5a0f6e006 00:10:43.053 00:44:35 -- target/ns_masking.sh@41 -- # [[ 79de5608a18449b0918772b5a0f6e006 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:43.053 00:44:35 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:43.053 00:44:35 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:10:43.053 00:44:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:43.053 00:44:35 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:43.053 [ 0]:0x1 00:10:43.053 00:44:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:43.053 00:44:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:43.053 00:44:35 -- target/ns_masking.sh@40 -- # nguid=56ae85e530d2465286baffe1cb771d27 00:10:43.053 00:44:35 -- target/ns_masking.sh@41 -- # [[ 56ae85e530d2465286baffe1cb771d27 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:43.053 00:44:35 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:10:43.053 00:44:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:43.053 00:44:35 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:43.053 [ 1]:0x2 00:10:43.053 00:44:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:43.053 00:44:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:43.053 00:44:35 -- target/ns_masking.sh@40 -- # nguid=79de5608a18449b0918772b5a0f6e006 00:10:43.053 00:44:35 -- target/ns_masking.sh@41 -- # [[ 79de5608a18449b0918772b5a0f6e006 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:43.053 00:44:35 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:43.313 00:44:35 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:10:43.313 00:44:35 -- common/autotest_common.sh@638 -- # local es=0 00:10:43.313 00:44:35 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:43.313 00:44:35 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:43.313 00:44:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:43.313 00:44:35 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:43.313 00:44:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:43.313 00:44:35 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:43.313 00:44:35 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:43.313 00:44:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:43.313 00:44:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:43.313 00:44:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:43.313 00:44:35 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:43.313 00:44:35 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:43.313 00:44:35 -- common/autotest_common.sh@641 -- # es=1 00:10:43.313 00:44:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:43.313 00:44:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:43.313 00:44:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:43.313 00:44:35 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:10:43.313 00:44:35 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:43.313 00:44:35 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:43.313 [ 0]:0x2 00:10:43.313 00:44:35 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:43.313 00:44:35 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:43.313 00:44:35 -- target/ns_masking.sh@40 -- # nguid=79de5608a18449b0918772b5a0f6e006 00:10:43.313 00:44:35 -- target/ns_masking.sh@41 -- # [[ 79de5608a18449b0918772b5a0f6e006 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:43.313 00:44:35 -- target/ns_masking.sh@91 -- # disconnect 00:10:43.313 00:44:35 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:43.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.313 00:44:35 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:43.572 00:44:36 -- target/ns_masking.sh@95 -- # connect 2 00:10:43.572 00:44:36 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3d001d22-fc51-4103-bdd5-b0a63b5e250f -a 10.0.0.2 -s 4420 -i 4 00:10:43.572 00:44:36 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:43.572 00:44:36 -- common/autotest_common.sh@1184 -- # local i=0 00:10:43.572 00:44:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.572 00:44:36 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:10:43.572 00:44:36 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:10:43.572 00:44:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:46.108 00:44:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:46.108 00:44:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:46.108 00:44:38 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:46.108 00:44:38 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:10:46.108 00:44:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:46.108 00:44:38 -- common/autotest_common.sh@1194 -- # return 0 00:10:46.108 00:44:38 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:46.108 00:44:38 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:46.108 00:44:38 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:46.108 00:44:38 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:46.108 00:44:38 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:10:46.108 00:44:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:46.108 00:44:38 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:46.108 [ 0]:0x1 00:10:46.108 00:44:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:46.108 00:44:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:46.108 00:44:38 -- target/ns_masking.sh@40 -- # nguid=56ae85e530d2465286baffe1cb771d27 00:10:46.108 00:44:38 -- target/ns_masking.sh@41 -- # [[ 56ae85e530d2465286baffe1cb771d27 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:46.108 00:44:38 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:10:46.108 00:44:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:46.108 00:44:38 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:46.108 [ 1]:0x2 
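Both namespaces are visible to host1 again at this point: namespace 2 was added auto-visible, and namespace 1, re-added with --no-auto-visible, became visible once nvmf_ns_add_host granted this host access. The probe used for every one of these checks is a small helper; a rough reconstruction from the trace (the real helper lives in test/nvmf/target/ns_masking.sh):

  # ns_is_visible <nsid>: the namespace should show up in list-ns and report a
  # non-zero NGUID; a namespace hidden from this host reports an all-zero NGUID
  ns_is_visible() {
      nvme list-ns /dev/nvme0 | grep "$1"
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

  # the masking controls being exercised around it:
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1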
00:10:46.108 00:44:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:46.108 00:44:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:46.108 00:44:38 -- target/ns_masking.sh@40 -- # nguid=79de5608a18449b0918772b5a0f6e006 00:10:46.108 00:44:38 -- target/ns_masking.sh@41 -- # [[ 79de5608a18449b0918772b5a0f6e006 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:46.108 00:44:38 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:46.108 00:44:38 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:10:46.108 00:44:38 -- common/autotest_common.sh@638 -- # local es=0 00:10:46.108 00:44:38 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:46.108 00:44:38 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:46.108 00:44:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:46.108 00:44:38 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:46.108 00:44:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:46.108 00:44:38 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:46.108 00:44:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:46.108 00:44:38 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:46.108 00:44:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:46.108 00:44:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:46.108 00:44:38 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:46.108 00:44:38 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:46.108 00:44:38 -- common/autotest_common.sh@641 -- # es=1 00:10:46.108 00:44:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:46.108 00:44:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:46.108 00:44:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:46.108 00:44:38 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:10:46.108 00:44:38 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:46.108 00:44:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:46.108 [ 0]:0x2 00:10:46.108 00:44:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:46.108 00:44:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:46.108 00:44:38 -- target/ns_masking.sh@40 -- # nguid=79de5608a18449b0918772b5a0f6e006 00:10:46.108 00:44:38 -- target/ns_masking.sh@41 -- # [[ 79de5608a18449b0918772b5a0f6e006 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:46.108 00:44:38 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:46.108 00:44:38 -- common/autotest_common.sh@638 -- # local es=0 00:10:46.108 00:44:38 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:46.108 00:44:38 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:46.108 00:44:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:46.108 00:44:38 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:46.108 00:44:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:46.108 00:44:38 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:46.108 00:44:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:46.108 00:44:38 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:46.108 00:44:38 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:46.108 00:44:38 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:46.366 [2024-04-27 00:44:38.915890] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:46.367 request: 00:10:46.367 { 00:10:46.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.367 "nsid": 2, 00:10:46.367 "host": "nqn.2016-06.io.spdk:host1", 00:10:46.367 "method": "nvmf_ns_remove_host", 00:10:46.367 "req_id": 1 00:10:46.367 } 00:10:46.367 Got JSON-RPC error response 00:10:46.367 response: 00:10:46.367 { 00:10:46.367 "code": -32602, 00:10:46.367 "message": "Invalid parameters" 00:10:46.367 } 00:10:46.367 00:44:38 -- common/autotest_common.sh@641 -- # es=1 00:10:46.367 00:44:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:46.367 00:44:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:46.367 00:44:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:46.367 00:44:38 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:10:46.367 00:44:38 -- common/autotest_common.sh@638 -- # local es=0 00:10:46.367 00:44:38 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:10:46.367 00:44:38 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:10:46.367 00:44:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:46.367 00:44:38 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:10:46.367 00:44:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:46.367 00:44:38 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:10:46.367 00:44:38 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:46.367 00:44:38 -- target/ns_masking.sh@39 -- # grep 0x1 00:10:46.367 00:44:38 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:46.367 00:44:38 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:46.367 00:44:39 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:46.367 00:44:39 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:46.367 00:44:39 -- common/autotest_common.sh@641 -- # es=1 00:10:46.367 00:44:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:46.367 00:44:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:46.367 00:44:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:46.367 00:44:39 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:10:46.367 00:44:39 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:46.367 00:44:39 -- target/ns_masking.sh@39 -- # grep 0x2 00:10:46.367 [ 0]:0x2 00:10:46.367 00:44:39 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:46.367 00:44:39 -- target/ns_masking.sh@40 -- 
# nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:46.625 00:44:39 -- target/ns_masking.sh@40 -- # nguid=79de5608a18449b0918772b5a0f6e006 00:10:46.625 00:44:39 -- target/ns_masking.sh@41 -- # [[ 79de5608a18449b0918772b5a0f6e006 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:46.625 00:44:39 -- target/ns_masking.sh@108 -- # disconnect 00:10:46.625 00:44:39 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:46.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.625 00:44:39 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.884 00:44:39 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:10:46.884 00:44:39 -- target/ns_masking.sh@114 -- # nvmftestfini 00:10:46.884 00:44:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:46.884 00:44:39 -- nvmf/common.sh@117 -- # sync 00:10:46.884 00:44:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:46.884 00:44:39 -- nvmf/common.sh@120 -- # set +e 00:10:46.884 00:44:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.884 00:44:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:46.884 rmmod nvme_tcp 00:10:46.884 rmmod nvme_fabrics 00:10:46.884 rmmod nvme_keyring 00:10:46.884 00:44:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.884 00:44:39 -- nvmf/common.sh@124 -- # set -e 00:10:46.884 00:44:39 -- nvmf/common.sh@125 -- # return 0 00:10:46.884 00:44:39 -- nvmf/common.sh@478 -- # '[' -n 1604607 ']' 00:10:46.884 00:44:39 -- nvmf/common.sh@479 -- # killprocess 1604607 00:10:46.884 00:44:39 -- common/autotest_common.sh@936 -- # '[' -z 1604607 ']' 00:10:46.884 00:44:39 -- common/autotest_common.sh@940 -- # kill -0 1604607 00:10:46.884 00:44:39 -- common/autotest_common.sh@941 -- # uname 00:10:46.884 00:44:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:46.884 00:44:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1604607 00:10:46.884 00:44:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:46.884 00:44:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:46.884 00:44:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1604607' 00:10:46.884 killing process with pid 1604607 00:10:46.884 00:44:39 -- common/autotest_common.sh@955 -- # kill 1604607 00:10:46.884 00:44:39 -- common/autotest_common.sh@960 -- # wait 1604607 00:10:47.144 00:44:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:47.144 00:44:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:47.144 00:44:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:47.144 00:44:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:47.144 00:44:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:47.144 00:44:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.144 00:44:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.144 00:44:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.679 00:44:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:49.679 00:10:49.679 real 0m19.388s 00:10:49.679 user 0m50.661s 00:10:49.679 sys 0m5.563s 00:10:49.679 00:44:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:49.679 00:44:41 -- common/autotest_common.sh@10 -- # set +x 00:10:49.679 ************************************ 00:10:49.679 END TEST nvmf_ns_masking 00:10:49.679 
************************************ 00:10:49.679 00:44:41 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:49.679 00:44:41 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:49.679 00:44:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:49.679 00:44:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:49.679 00:44:41 -- common/autotest_common.sh@10 -- # set +x 00:10:49.679 ************************************ 00:10:49.679 START TEST nvmf_nvme_cli 00:10:49.679 ************************************ 00:10:49.679 00:44:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:49.679 * Looking for test storage... 00:10:49.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.680 00:44:42 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.680 00:44:42 -- nvmf/common.sh@7 -- # uname -s 00:10:49.680 00:44:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.680 00:44:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.680 00:44:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.680 00:44:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.680 00:44:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.680 00:44:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.680 00:44:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.680 00:44:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.680 00:44:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.680 00:44:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.680 00:44:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:49.680 00:44:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:49.680 00:44:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.680 00:44:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.680 00:44:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.680 00:44:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.680 00:44:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.680 00:44:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.680 00:44:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.680 00:44:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.680 00:44:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.680 00:44:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.680 00:44:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.680 00:44:42 -- paths/export.sh@5 -- # export PATH 00:10:49.680 00:44:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.680 00:44:42 -- nvmf/common.sh@47 -- # : 0 00:10:49.680 00:44:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.680 00:44:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.680 00:44:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.680 00:44:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.680 00:44:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.680 00:44:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.680 00:44:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:49.680 00:44:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.680 00:44:42 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:49.680 00:44:42 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:49.680 00:44:42 -- target/nvme_cli.sh@14 -- # devs=() 00:10:49.680 00:44:42 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:49.680 00:44:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:49.680 00:44:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.680 00:44:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:49.680 00:44:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:49.680 00:44:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:49.680 00:44:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.680 00:44:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:49.680 00:44:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.680 00:44:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:49.680 00:44:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:49.680 00:44:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:49.680 00:44:42 -- common/autotest_common.sh@10 -- # set +x 00:10:54.951 00:44:46 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:54.951 00:44:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:54.951 00:44:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:54.951 00:44:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:54.951 00:44:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:54.951 00:44:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:54.951 00:44:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:54.951 00:44:46 -- nvmf/common.sh@295 -- # net_devs=() 00:10:54.951 00:44:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:54.951 00:44:46 -- nvmf/common.sh@296 -- # e810=() 00:10:54.951 00:44:46 -- nvmf/common.sh@296 -- # local -ga e810 00:10:54.951 00:44:46 -- nvmf/common.sh@297 -- # x722=() 00:10:54.951 00:44:46 -- nvmf/common.sh@297 -- # local -ga x722 00:10:54.951 00:44:46 -- nvmf/common.sh@298 -- # mlx=() 00:10:54.951 00:44:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:54.951 00:44:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.951 00:44:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.951 00:44:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.951 00:44:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.951 00:44:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.951 00:44:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.951 00:44:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.952 00:44:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.952 00:44:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.952 00:44:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.952 00:44:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.952 00:44:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:54.952 00:44:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:54.952 00:44:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:54.952 00:44:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.952 00:44:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:54.952 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:54.952 00:44:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.952 00:44:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:54.952 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:54.952 00:44:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
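Editor's note: the PCI scan above matches the Intel E810 device IDs (0x8086 / 0x159b in this run) against the host's PCI bus and then reads the bound net devices out of sysfs. A minimal stand-alone sketch of the same check, using only the IDs and sysfs paths visible in the log (this is illustrative, not the actual gather_supported_nvmf_pci_devs helper):

# Illustrative only: list net interfaces backed by Intel E810 (0x8086:0x159b) ports,
# mirroring the "Found 0000:86:00.x" and "Found net devices under ..." lines above.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")
    device=$(cat "$pci/device")
    if [[ "$vendor" == "0x8086" && "$device" == "0x159b" ]]; then
        echo "Found ${pci##*/} ($vendor - $device)"
        ls "$pci/net" 2>/dev/null   # net devices bound to this port (cvl_0_0 / cvl_0_1 in this run)
    fi
done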
00:10:54.952 00:44:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:54.952 00:44:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.952 00:44:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.952 00:44:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:54.952 00:44:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.952 00:44:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:54.952 Found net devices under 0000:86:00.0: cvl_0_0 00:10:54.952 00:44:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.952 00:44:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.952 00:44:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.952 00:44:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:54.952 00:44:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.952 00:44:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:54.952 Found net devices under 0000:86:00.1: cvl_0_1 00:10:54.952 00:44:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.952 00:44:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:54.952 00:44:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:54.952 00:44:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:54.952 00:44:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:54.952 00:44:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.952 00:44:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.952 00:44:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:54.952 00:44:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:54.952 00:44:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:54.952 00:44:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:54.952 00:44:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:54.952 00:44:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:54.952 00:44:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.952 00:44:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:54.952 00:44:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:54.952 00:44:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:54.952 00:44:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:54.952 00:44:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:54.952 00:44:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:54.952 00:44:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:54.952 00:44:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:54.952 00:44:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:54.952 00:44:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:54.952 00:44:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:54.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:54.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:10:54.952 00:10:54.952 --- 10.0.0.2 ping statistics --- 00:10:54.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.952 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:10:54.952 00:44:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:54.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:54.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:10:54.952 00:10:54.952 --- 10.0.0.1 ping statistics --- 00:10:54.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.952 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:10:54.952 00:44:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.952 00:44:47 -- nvmf/common.sh@411 -- # return 0 00:10:54.952 00:44:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:54.952 00:44:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.952 00:44:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:54.952 00:44:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:54.952 00:44:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.952 00:44:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:54.952 00:44:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:54.952 00:44:47 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:54.952 00:44:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:54.952 00:44:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:54.952 00:44:47 -- common/autotest_common.sh@10 -- # set +x 00:10:54.952 00:44:47 -- nvmf/common.sh@470 -- # nvmfpid=1610135 00:10:54.952 00:44:47 -- nvmf/common.sh@471 -- # waitforlisten 1610135 00:10:54.952 00:44:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.952 00:44:47 -- common/autotest_common.sh@817 -- # '[' -z 1610135 ']' 00:10:54.952 00:44:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.952 00:44:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:54.952 00:44:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.952 00:44:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:54.952 00:44:47 -- common/autotest_common.sh@10 -- # set +x 00:10:54.952 [2024-04-27 00:44:47.306095] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:10:54.952 [2024-04-27 00:44:47.306139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.952 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.952 [2024-04-27 00:44:47.361455] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.952 [2024-04-27 00:44:47.438917] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.952 [2024-04-27 00:44:47.438958] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:54.952 [2024-04-27 00:44:47.438965] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.952 [2024-04-27 00:44:47.438971] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.952 [2024-04-27 00:44:47.438976] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.952 [2024-04-27 00:44:47.439044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.952 [2024-04-27 00:44:47.439063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.952 [2024-04-27 00:44:47.439149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.952 [2024-04-27 00:44:47.439151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.519 00:44:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:55.519 00:44:48 -- common/autotest_common.sh@850 -- # return 0 00:10:55.519 00:44:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:55.519 00:44:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:55.519 00:44:48 -- common/autotest_common.sh@10 -- # set +x 00:10:55.519 00:44:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.519 00:44:48 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.519 00:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.519 00:44:48 -- common/autotest_common.sh@10 -- # set +x 00:10:55.519 [2024-04-27 00:44:48.152919] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.519 00:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.520 00:44:48 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:55.520 00:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.520 00:44:48 -- common/autotest_common.sh@10 -- # set +x 00:10:55.520 Malloc0 00:10:55.520 00:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.520 00:44:48 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:55.520 00:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.520 00:44:48 -- common/autotest_common.sh@10 -- # set +x 00:10:55.520 Malloc1 00:10:55.520 00:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.520 00:44:48 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:55.520 00:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.520 00:44:48 -- common/autotest_common.sh@10 -- # set +x 00:10:55.520 00:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.520 00:44:48 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:55.520 00:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.520 00:44:48 -- common/autotest_common.sh@10 -- # set +x 00:10:55.778 00:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.778 00:44:48 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:55.778 00:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.778 00:44:48 -- common/autotest_common.sh@10 -- # set +x 00:10:55.778 00:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.778 00:44:48 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:10:55.778 00:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.778 00:44:48 -- common/autotest_common.sh@10 -- # set +x 00:10:55.778 [2024-04-27 00:44:48.230573] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.778 00:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.778 00:44:48 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:55.778 00:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.778 00:44:48 -- common/autotest_common.sh@10 -- # set +x 00:10:55.778 00:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.778 00:44:48 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:55.778 00:10:55.778 Discovery Log Number of Records 2, Generation counter 2 00:10:55.778 =====Discovery Log Entry 0====== 00:10:55.778 trtype: tcp 00:10:55.778 adrfam: ipv4 00:10:55.778 subtype: current discovery subsystem 00:10:55.778 treq: not required 00:10:55.778 portid: 0 00:10:55.778 trsvcid: 4420 00:10:55.778 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:55.778 traddr: 10.0.0.2 00:10:55.778 eflags: explicit discovery connections, duplicate discovery information 00:10:55.778 sectype: none 00:10:55.778 =====Discovery Log Entry 1====== 00:10:55.778 trtype: tcp 00:10:55.778 adrfam: ipv4 00:10:55.778 subtype: nvme subsystem 00:10:55.778 treq: not required 00:10:55.778 portid: 0 00:10:55.778 trsvcid: 4420 00:10:55.778 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:55.778 traddr: 10.0.0.2 00:10:55.778 eflags: none 00:10:55.778 sectype: none 00:10:55.778 00:44:48 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:55.778 00:44:48 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:55.778 00:44:48 -- nvmf/common.sh@511 -- # local dev _ 00:10:55.778 00:44:48 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:55.778 00:44:48 -- nvmf/common.sh@510 -- # nvme list 00:10:55.778 00:44:48 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:55.778 00:44:48 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:55.778 00:44:48 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:55.778 00:44:48 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:55.778 00:44:48 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:55.778 00:44:48 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:57.155 00:44:49 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:57.155 00:44:49 -- common/autotest_common.sh@1184 -- # local i=0 00:10:57.155 00:44:49 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.155 00:44:49 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:10:57.155 00:44:49 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:10:57.155 00:44:49 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:59.057 00:44:51 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:59.057 00:44:51 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:59.057 00:44:51 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.057 00:44:51 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
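Editor's note: the host-side exchange above reduces to a few nvme-cli calls against the listener that was just created. A minimal sketch using the addresses, NQNs and serial from this run; the retry loop is a simplified stand-in for the harness's waitforserial helper, not the harness code itself:

# Host side of the test above (all values taken from this run).
ADDR=10.0.0.2 PORT=4420
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562

nvme discover -t tcp -a "$ADDR" -s "$PORT" --hostnqn="$HOSTNQN" --hostid="$HOSTID"
nvme connect  -t tcp -a "$ADDR" -s "$PORT" -n "$SUBNQN" --hostnqn="$HOSTNQN" --hostid="$HOSTID"

# Wait until both malloc namespaces are visible (serial as reported above), then detach.
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ]; do sleep 1; done
nvme disconnect -n "$SUBNQN"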
00:10:59.057 00:44:51 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.057 00:44:51 -- common/autotest_common.sh@1194 -- # return 0 00:10:59.057 00:44:51 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:59.057 00:44:51 -- nvmf/common.sh@511 -- # local dev _ 00:10:59.057 00:44:51 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:59.057 00:44:51 -- nvmf/common.sh@510 -- # nvme list 00:10:59.057 00:44:51 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:59.057 00:44:51 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:59.057 00:44:51 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:59.057 00:44:51 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:59.057 00:44:51 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:59.057 00:44:51 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:10:59.057 00:44:51 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:59.057 00:44:51 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:59.057 00:44:51 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:10:59.057 00:44:51 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:59.057 00:44:51 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:59.057 /dev/nvme0n1 ]] 00:10:59.057 00:44:51 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:59.058 00:44:51 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:10:59.058 00:44:51 -- nvmf/common.sh@511 -- # local dev _ 00:10:59.058 00:44:51 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:59.058 00:44:51 -- nvmf/common.sh@510 -- # nvme list 00:10:59.316 00:44:51 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:10:59.316 00:44:51 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:59.316 00:44:51 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:10:59.316 00:44:51 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:59.316 00:44:51 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:59.316 00:44:51 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:10:59.316 00:44:51 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:59.316 00:44:51 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:59.316 00:44:51 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:10:59.316 00:44:51 -- nvmf/common.sh@513 -- # read -r dev _ 00:10:59.316 00:44:51 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:59.316 00:44:51 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.574 00:44:52 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.574 00:44:52 -- common/autotest_common.sh@1205 -- # local i=0 00:10:59.574 00:44:52 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:59.574 00:44:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.574 00:44:52 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.574 00:44:52 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:59.574 00:44:52 -- common/autotest_common.sh@1217 -- # return 0 00:10:59.574 00:44:52 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:59.574 00:44:52 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.574 00:44:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.574 00:44:52 -- common/autotest_common.sh@10 -- # set +x 00:10:59.574 00:44:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.574 00:44:52 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:59.574 00:44:52 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:59.574 00:44:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:59.574 00:44:52 -- nvmf/common.sh@117 -- # sync 00:10:59.574 00:44:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:59.574 00:44:52 -- nvmf/common.sh@120 -- # set +e 00:10:59.574 00:44:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:59.574 00:44:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:59.574 rmmod nvme_tcp 00:10:59.574 rmmod nvme_fabrics 00:10:59.574 rmmod nvme_keyring 00:10:59.574 00:44:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:59.574 00:44:52 -- nvmf/common.sh@124 -- # set -e 00:10:59.574 00:44:52 -- nvmf/common.sh@125 -- # return 0 00:10:59.574 00:44:52 -- nvmf/common.sh@478 -- # '[' -n 1610135 ']' 00:10:59.575 00:44:52 -- nvmf/common.sh@479 -- # killprocess 1610135 00:10:59.575 00:44:52 -- common/autotest_common.sh@936 -- # '[' -z 1610135 ']' 00:10:59.575 00:44:52 -- common/autotest_common.sh@940 -- # kill -0 1610135 00:10:59.575 00:44:52 -- common/autotest_common.sh@941 -- # uname 00:10:59.575 00:44:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:59.575 00:44:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1610135 00:10:59.575 00:44:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:59.575 00:44:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:59.575 00:44:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1610135' 00:10:59.575 killing process with pid 1610135 00:10:59.575 00:44:52 -- common/autotest_common.sh@955 -- # kill 1610135 00:10:59.575 00:44:52 -- common/autotest_common.sh@960 -- # wait 1610135 00:10:59.834 00:44:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:59.834 00:44:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:59.834 00:44:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:59.834 00:44:52 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.834 00:44:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:59.834 00:44:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.834 00:44:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.834 00:44:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.433 00:44:54 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:02.433 00:11:02.433 real 0m12.530s 00:11:02.433 user 0m21.364s 00:11:02.433 sys 0m4.462s 00:11:02.433 00:44:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:02.433 00:44:54 -- common/autotest_common.sh@10 -- # set +x 00:11:02.433 ************************************ 00:11:02.433 END TEST nvmf_nvme_cli 00:11:02.433 ************************************ 00:11:02.433 00:44:54 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:02.433 00:44:54 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:02.433 00:44:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:02.433 00:44:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:02.433 00:44:54 -- common/autotest_common.sh@10 -- # set +x 00:11:02.433 ************************************ 00:11:02.433 START TEST nvmf_vfio_user 00:11:02.433 ************************************ 00:11:02.433 00:44:54 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:02.433 * Looking for test storage... 00:11:02.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.433 00:44:54 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.433 00:44:54 -- nvmf/common.sh@7 -- # uname -s 00:11:02.433 00:44:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.433 00:44:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.433 00:44:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.433 00:44:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.433 00:44:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.433 00:44:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.433 00:44:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.433 00:44:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.433 00:44:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.433 00:44:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.433 00:44:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:02.433 00:44:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:02.433 00:44:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.433 00:44:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.433 00:44:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.433 00:44:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.433 00:44:54 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.433 00:44:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.433 00:44:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.433 00:44:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.433 00:44:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.433 00:44:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.433 00:44:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.433 00:44:54 -- paths/export.sh@5 -- # export PATH 00:11:02.434 00:44:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.434 00:44:54 -- nvmf/common.sh@47 -- # : 0 00:11:02.434 00:44:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.434 00:44:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.434 00:44:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.434 00:44:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.434 00:44:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.434 00:44:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.434 00:44:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.434 00:44:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1611636 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1611636' 00:11:02.434 Process pid: 1611636 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1611636 00:11:02.434 00:44:54 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:02.434 00:44:54 -- common/autotest_common.sh@817 -- # '[' -z 1611636 ']' 00:11:02.434 00:44:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.434 00:44:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:02.434 00:44:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.434 00:44:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:02.434 00:44:54 -- common/autotest_common.sh@10 -- # set +x 00:11:02.434 [2024-04-27 00:44:54.871960] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:11:02.434 [2024-04-27 00:44:54.872005] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.434 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.434 [2024-04-27 00:44:54.926869] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.434 [2024-04-27 00:44:55.005067] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.434 [2024-04-27 00:44:55.005107] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.434 [2024-04-27 00:44:55.005114] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.434 [2024-04-27 00:44:55.005120] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.434 [2024-04-27 00:44:55.005125] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.434 [2024-04-27 00:44:55.005164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.434 [2024-04-27 00:44:55.005257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.434 [2024-04-27 00:44:55.005348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.434 [2024-04-27 00:44:55.005349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.001 00:44:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:03.001 00:44:55 -- common/autotest_common.sh@850 -- # return 0 00:11:03.001 00:44:55 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:04.374 00:44:56 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:04.374 00:44:56 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:04.374 00:44:56 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:04.374 00:44:56 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:04.374 00:44:56 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:04.374 00:44:56 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:04.374 Malloc1 00:11:04.374 00:44:57 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:04.631 00:44:57 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:04.889 00:44:57 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:05.148 00:44:57 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:05.148 00:44:57 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:05.148 00:44:57 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:05.148 Malloc2 00:11:05.148 00:44:57 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:05.406 00:44:57 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:05.665 00:44:58 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:05.665 00:44:58 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:05.665 00:44:58 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:05.665 00:44:58 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:05.665 00:44:58 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:05.665 00:44:58 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:05.665 00:44:58 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:05.925 [2024-04-27 00:44:58.371077] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:11:05.925 [2024-04-27 00:44:58.371121] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612143 ] 00:11:05.925 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.925 [2024-04-27 00:44:58.401586] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:05.925 [2024-04-27 00:44:58.403929] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:05.925 [2024-04-27 00:44:58.403946] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f77cabbc000 00:11:05.925 [2024-04-27 00:44:58.404929] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:05.925 [2024-04-27 00:44:58.405935] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:05.925 [2024-04-27 00:44:58.406939] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:05.925 [2024-04-27 00:44:58.407941] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:05.925 [2024-04-27 00:44:58.408940] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:05.925 [2024-04-27 00:44:58.409956] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
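Editor's note: alongside the controller-init trace around it, the vfio-user bring-up above amounts to a short rpc.py sequence plus one identify over the vfio-user socket. A condensed sketch for the first device, using the paths and names from this run (rpc.py and spdk_nvme_identify are shown without their full workspace paths for brevity):

# Target side: create the VFIOUSER transport and export one malloc namespace
# (commands and arguments as issued by the test above).
rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

# Initiator side: identify the controller through the vfio-user transport.
spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'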
00:11:05.925 [2024-04-27 00:44:58.410957] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:05.925 [2024-04-27 00:44:58.411962] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:05.925 [2024-04-27 00:44:58.412972] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:05.925 [2024-04-27 00:44:58.412983] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f77cabb1000 00:11:05.925 [2024-04-27 00:44:58.413925] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:05.926 [2024-04-27 00:44:58.426537] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:05.926 [2024-04-27 00:44:58.426562] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:05.926 [2024-04-27 00:44:58.429086] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:05.926 [2024-04-27 00:44:58.429123] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:05.926 [2024-04-27 00:44:58.429195] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:05.926 [2024-04-27 00:44:58.429217] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:05.926 [2024-04-27 00:44:58.429222] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:05.926 [2024-04-27 00:44:58.430087] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:05.926 [2024-04-27 00:44:58.430096] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:05.926 [2024-04-27 00:44:58.430103] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:05.926 [2024-04-27 00:44:58.431090] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:05.926 [2024-04-27 00:44:58.431098] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:05.926 [2024-04-27 00:44:58.431105] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:05.926 [2024-04-27 00:44:58.432098] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:05.926 [2024-04-27 00:44:58.432106] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:05.926 [2024-04-27 00:44:58.433100] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:05.926 [2024-04-27 00:44:58.433108] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:05.926 [2024-04-27 00:44:58.433113] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:05.926 [2024-04-27 00:44:58.433119] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:05.926 [2024-04-27 00:44:58.433224] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:05.926 [2024-04-27 00:44:58.433229] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:05.926 [2024-04-27 00:44:58.433233] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:05.926 [2024-04-27 00:44:58.434106] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:05.926 [2024-04-27 00:44:58.435109] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:05.926 [2024-04-27 00:44:58.436117] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:05.926 [2024-04-27 00:44:58.437111] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:05.926 [2024-04-27 00:44:58.437173] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:05.926 [2024-04-27 00:44:58.438128] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:05.926 [2024-04-27 00:44:58.438135] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:05.926 [2024-04-27 00:44:58.438139] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438159] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:05.926 [2024-04-27 00:44:58.438171] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438188] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:05.926 [2024-04-27 00:44:58.438193] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:05.926 [2024-04-27 00:44:58.438206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:05.926 [2024-04-27 
00:44:58.438245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438255] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:05.926 [2024-04-27 00:44:58.438259] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:05.926 [2024-04-27 00:44:58.438264] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:05.926 [2024-04-27 00:44:58.438268] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:05.926 [2024-04-27 00:44:58.438273] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:05.926 [2024-04-27 00:44:58.438277] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:05.926 [2024-04-27 00:44:58.438281] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438288] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438297] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:05.926 [2024-04-27 00:44:58.438310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.926 [2024-04-27 00:44:58.438329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.926 [2024-04-27 00:44:58.438337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.926 [2024-04-27 00:44:58.438344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.926 [2024-04-27 00:44:58.438348] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438356] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:05.926 [2024-04-27 00:44:58.438372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438377] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:05.926 [2024-04-27 00:44:58.438385] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438393] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438399] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:05.926 [2024-04-27 00:44:58.438415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438453] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438460] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438467] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:05.926 [2024-04-27 00:44:58.438471] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:05.926 [2024-04-27 00:44:58.438477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:05.926 [2024-04-27 00:44:58.438489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438498] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:05.926 [2024-04-27 00:44:58.438506] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438513] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438519] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:05.926 [2024-04-27 00:44:58.438523] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:05.926 [2024-04-27 00:44:58.438528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:05.926 [2024-04-27 00:44:58.438546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438558] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438565] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438570] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:11:05.926 [2024-04-27 00:44:58.438574] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:05.926 [2024-04-27 00:44:58.438580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:05.926 [2024-04-27 00:44:58.438595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438602] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438608] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438617] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438622] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438627] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438631] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:05.926 [2024-04-27 00:44:58.438635] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:05.926 [2024-04-27 00:44:58.438640] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:05.926 [2024-04-27 00:44:58.438658] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:05.926 [2024-04-27 00:44:58.438666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:05.926 [2024-04-27 00:44:58.438683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438692] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:05.926 [2024-04-27 00:44:58.438700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438709] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:05.926 [2024-04-27 00:44:58.438718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438728] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:05.926 [2024-04-27 00:44:58.438732] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:05.926 [2024-04-27 00:44:58.438735] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:05.926 [2024-04-27 00:44:58.438738] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:05.926 [2024-04-27 00:44:58.438743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:05.926 [2024-04-27 00:44:58.438750] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:05.926 [2024-04-27 00:44:58.438754] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:05.926 [2024-04-27 00:44:58.438759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:05.926 [2024-04-27 00:44:58.438765] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:05.926 [2024-04-27 00:44:58.438769] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:05.926 [2024-04-27 00:44:58.438774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:05.926 [2024-04-27 00:44:58.438781] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:05.926 [2024-04-27 00:44:58.438785] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:05.926 [2024-04-27 00:44:58.438792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:05.926 [2024-04-27 00:44:58.438798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:05.926 [2024-04-27 00:44:58.438824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:05.926 ===================================================== 00:11:05.926 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:05.926 ===================================================== 00:11:05.926 Controller Capabilities/Features 00:11:05.926 ================================ 00:11:05.926 Vendor ID: 4e58 00:11:05.926 Subsystem Vendor ID: 4e58 00:11:05.926 Serial Number: SPDK1 00:11:05.926 Model Number: SPDK bdev Controller 00:11:05.926 Firmware Version: 24.05 00:11:05.926 Recommended Arb Burst: 6 00:11:05.926 IEEE OUI Identifier: 8d 6b 50 00:11:05.926 Multi-path I/O 00:11:05.926 May have multiple subsystem ports: Yes 00:11:05.926 May have multiple controllers: Yes 00:11:05.926 Associated with SR-IOV VF: No 00:11:05.926 Max Data Transfer Size: 131072 00:11:05.926 Max Number of Namespaces: 32 00:11:05.926 Max Number of I/O Queues: 127 00:11:05.926 NVMe 
Specification Version (VS): 1.3 00:11:05.926 NVMe Specification Version (Identify): 1.3 00:11:05.926 Maximum Queue Entries: 256 00:11:05.926 Contiguous Queues Required: Yes 00:11:05.926 Arbitration Mechanisms Supported 00:11:05.926 Weighted Round Robin: Not Supported 00:11:05.926 Vendor Specific: Not Supported 00:11:05.926 Reset Timeout: 15000 ms 00:11:05.926 Doorbell Stride: 4 bytes 00:11:05.926 NVM Subsystem Reset: Not Supported 00:11:05.926 Command Sets Supported 00:11:05.926 NVM Command Set: Supported 00:11:05.926 Boot Partition: Not Supported 00:11:05.926 Memory Page Size Minimum: 4096 bytes 00:11:05.926 Memory Page Size Maximum: 4096 bytes 00:11:05.926 Persistent Memory Region: Not Supported 00:11:05.926 Optional Asynchronous Events Supported 00:11:05.926 Namespace Attribute Notices: Supported 00:11:05.926 Firmware Activation Notices: Not Supported 00:11:05.926 ANA Change Notices: Not Supported 00:11:05.926 PLE Aggregate Log Change Notices: Not Supported 00:11:05.926 LBA Status Info Alert Notices: Not Supported 00:11:05.926 EGE Aggregate Log Change Notices: Not Supported 00:11:05.926 Normal NVM Subsystem Shutdown event: Not Supported 00:11:05.926 Zone Descriptor Change Notices: Not Supported 00:11:05.926 Discovery Log Change Notices: Not Supported 00:11:05.926 Controller Attributes 00:11:05.926 128-bit Host Identifier: Supported 00:11:05.926 Non-Operational Permissive Mode: Not Supported 00:11:05.926 NVM Sets: Not Supported 00:11:05.926 Read Recovery Levels: Not Supported 00:11:05.926 Endurance Groups: Not Supported 00:11:05.926 Predictable Latency Mode: Not Supported 00:11:05.926 Traffic Based Keep ALive: Not Supported 00:11:05.926 Namespace Granularity: Not Supported 00:11:05.926 SQ Associations: Not Supported 00:11:05.926 UUID List: Not Supported 00:11:05.926 Multi-Domain Subsystem: Not Supported 00:11:05.926 Fixed Capacity Management: Not Supported 00:11:05.926 Variable Capacity Management: Not Supported 00:11:05.926 Delete Endurance Group: Not Supported 00:11:05.926 Delete NVM Set: Not Supported 00:11:05.926 Extended LBA Formats Supported: Not Supported 00:11:05.926 Flexible Data Placement Supported: Not Supported 00:11:05.926 00:11:05.926 Controller Memory Buffer Support 00:11:05.926 ================================ 00:11:05.926 Supported: No 00:11:05.926 00:11:05.926 Persistent Memory Region Support 00:11:05.926 ================================ 00:11:05.926 Supported: No 00:11:05.926 00:11:05.926 Admin Command Set Attributes 00:11:05.926 ============================ 00:11:05.926 Security Send/Receive: Not Supported 00:11:05.926 Format NVM: Not Supported 00:11:05.926 Firmware Activate/Download: Not Supported 00:11:05.926 Namespace Management: Not Supported 00:11:05.926 Device Self-Test: Not Supported 00:11:05.926 Directives: Not Supported 00:11:05.926 NVMe-MI: Not Supported 00:11:05.926 Virtualization Management: Not Supported 00:11:05.926 Doorbell Buffer Config: Not Supported 00:11:05.926 Get LBA Status Capability: Not Supported 00:11:05.926 Command & Feature Lockdown Capability: Not Supported 00:11:05.926 Abort Command Limit: 4 00:11:05.926 Async Event Request Limit: 4 00:11:05.926 Number of Firmware Slots: N/A 00:11:05.926 Firmware Slot 1 Read-Only: N/A 00:11:05.926 Firmware Activation Without Reset: N/A 00:11:05.926 Multiple Update Detection Support: N/A 00:11:05.926 Firmware Update Granularity: No Information Provided 00:11:05.926 Per-Namespace SMART Log: No 00:11:05.926 Asymmetric Namespace Access Log Page: Not Supported 00:11:05.926 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:11:05.926 Command Effects Log Page: Supported 00:11:05.926 Get Log Page Extended Data: Supported 00:11:05.926 Telemetry Log Pages: Not Supported 00:11:05.927 Persistent Event Log Pages: Not Supported 00:11:05.927 Supported Log Pages Log Page: May Support 00:11:05.927 Commands Supported & Effects Log Page: Not Supported 00:11:05.927 Feature Identifiers & Effects Log Page:May Support 00:11:05.927 NVMe-MI Commands & Effects Log Page: May Support 00:11:05.927 Data Area 4 for Telemetry Log: Not Supported 00:11:05.927 Error Log Page Entries Supported: 128 00:11:05.927 Keep Alive: Supported 00:11:05.927 Keep Alive Granularity: 10000 ms 00:11:05.927 00:11:05.927 NVM Command Set Attributes 00:11:05.927 ========================== 00:11:05.927 Submission Queue Entry Size 00:11:05.927 Max: 64 00:11:05.927 Min: 64 00:11:05.927 Completion Queue Entry Size 00:11:05.927 Max: 16 00:11:05.927 Min: 16 00:11:05.927 Number of Namespaces: 32 00:11:05.927 Compare Command: Supported 00:11:05.927 Write Uncorrectable Command: Not Supported 00:11:05.927 Dataset Management Command: Supported 00:11:05.927 Write Zeroes Command: Supported 00:11:05.927 Set Features Save Field: Not Supported 00:11:05.927 Reservations: Not Supported 00:11:05.927 Timestamp: Not Supported 00:11:05.927 Copy: Supported 00:11:05.927 Volatile Write Cache: Present 00:11:05.927 Atomic Write Unit (Normal): 1 00:11:05.927 Atomic Write Unit (PFail): 1 00:11:05.927 Atomic Compare & Write Unit: 1 00:11:05.927 Fused Compare & Write: Supported 00:11:05.927 Scatter-Gather List 00:11:05.927 SGL Command Set: Supported (Dword aligned) 00:11:05.927 SGL Keyed: Not Supported 00:11:05.927 SGL Bit Bucket Descriptor: Not Supported 00:11:05.927 SGL Metadata Pointer: Not Supported 00:11:05.927 Oversized SGL: Not Supported 00:11:05.927 SGL Metadata Address: Not Supported 00:11:05.927 SGL Offset: Not Supported 00:11:05.927 Transport SGL Data Block: Not Supported 00:11:05.927 Replay Protected Memory Block: Not Supported 00:11:05.927 00:11:05.927 Firmware Slot Information 00:11:05.927 ========================= 00:11:05.927 Active slot: 1 00:11:05.927 Slot 1 Firmware Revision: 24.05 00:11:05.927 00:11:05.927 00:11:05.927 Commands Supported and Effects 00:11:05.927 ============================== 00:11:05.927 Admin Commands 00:11:05.927 -------------- 00:11:05.927 Get Log Page (02h): Supported 00:11:05.927 Identify (06h): Supported 00:11:05.927 Abort (08h): Supported 00:11:05.927 Set Features (09h): Supported 00:11:05.927 Get Features (0Ah): Supported 00:11:05.927 Asynchronous Event Request (0Ch): Supported 00:11:05.927 Keep Alive (18h): Supported 00:11:05.927 I/O Commands 00:11:05.927 ------------ 00:11:05.927 Flush (00h): Supported LBA-Change 00:11:05.927 Write (01h): Supported LBA-Change 00:11:05.927 Read (02h): Supported 00:11:05.927 Compare (05h): Supported 00:11:05.927 Write Zeroes (08h): Supported LBA-Change 00:11:05.927 Dataset Management (09h): Supported LBA-Change 00:11:05.927 Copy (19h): Supported LBA-Change 00:11:05.927 Unknown (79h): Supported LBA-Change 00:11:05.927 Unknown (7Ah): Supported 00:11:05.927 00:11:05.927 Error Log 00:11:05.927 ========= 00:11:05.927 00:11:05.927 Arbitration 00:11:05.927 =========== 00:11:05.927 Arbitration Burst: 1 00:11:05.927 00:11:05.927 Power Management 00:11:05.927 ================ 00:11:05.927 Number of Power States: 1 00:11:05.927 Current Power State: Power State #0 00:11:05.927 Power State #0: 00:11:05.927 Max Power: 0.00 W 00:11:05.927 Non-Operational State: Operational 00:11:05.927 Entry 
Latency: Not Reported 00:11:05.927 Exit Latency: Not Reported 00:11:05.927 Relative Read Throughput: 0 00:11:05.927 Relative Read Latency: 0 00:11:05.927 Relative Write Throughput: 0 00:11:05.927 Relative Write Latency: 0 00:11:05.927 Idle Power: Not Reported 00:11:05.927 Active Power: Not Reported 00:11:05.927 Non-Operational Permissive Mode: Not Supported 00:11:05.927 00:11:05.927 Health Information 00:11:05.927 ================== 00:11:05.927 Critical Warnings: 00:11:05.927 Available Spare Space: OK 00:11:05.927 Temperature: OK 00:11:05.927 Device Reliability: OK 00:11:05.927 Read Only: No 00:11:05.927 Volatile Memory Backup: OK 00:11:05.927
[2024-04-27 00:44:58.438916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:05.927 [2024-04-27 00:44:58.438925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:05.927 [2024-04-27 00:44:58.438947] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:05.927 [2024-04-27 00:44:58.438956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.927 [2024-04-27 00:44:58.438962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.927 [2024-04-27 00:44:58.438967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.927 [2024-04-27 00:44:58.438972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.927 [2024-04-27 00:44:58.441079] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:05.927 [2024-04-27 00:44:58.441090] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:05.927 [2024-04-27 00:44:58.441144] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:05.927 [2024-04-27 00:44:58.441189] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:05.927 [2024-04-27 00:44:58.441195] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:05.927 [2024-04-27 00:44:58.442153] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:05.927 [2024-04-27 00:44:58.442163] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:05.927 [2024-04-27 00:44:58.442210] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:05.927 [2024-04-27 00:44:58.444191] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:05.927
Current Temperature: 0 Kelvin (-273 Celsius) 00:11:05.927 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:05.927 Available Spare: 0% 00:11:05.927 Available Spare Threshold: 0% 00:11:05.927 Life Percentage Used: 0%
00:11:05.927 Data Units Read: 0 00:11:05.927 Data Units Written: 0 00:11:05.927 Host Read Commands: 0 00:11:05.927 Host Write Commands: 0 00:11:05.927 Controller Busy Time: 0 minutes 00:11:05.927 Power Cycles: 0 00:11:05.927 Power On Hours: 0 hours 00:11:05.927 Unsafe Shutdowns: 0 00:11:05.927 Unrecoverable Media Errors: 0 00:11:05.927 Lifetime Error Log Entries: 0 00:11:05.927 Warning Temperature Time: 0 minutes 00:11:05.927 Critical Temperature Time: 0 minutes 00:11:05.927 00:11:05.927 Number of Queues 00:11:05.927 ================ 00:11:05.927 Number of I/O Submission Queues: 127 00:11:05.927 Number of I/O Completion Queues: 127 00:11:05.927 00:11:05.927 Active Namespaces 00:11:05.927 ================= 00:11:05.927 Namespace ID:1 00:11:05.927 Error Recovery Timeout: Unlimited 00:11:05.927 Command Set Identifier: NVM (00h) 00:11:05.927 Deallocate: Supported 00:11:05.927 Deallocated/Unwritten Error: Not Supported 00:11:05.927 Deallocated Read Value: Unknown 00:11:05.927 Deallocate in Write Zeroes: Not Supported 00:11:05.927 Deallocated Guard Field: 0xFFFF 00:11:05.927 Flush: Supported 00:11:05.927 Reservation: Supported 00:11:05.927 Namespace Sharing Capabilities: Multiple Controllers 00:11:05.927 Size (in LBAs): 131072 (0GiB) 00:11:05.927 Capacity (in LBAs): 131072 (0GiB) 00:11:05.927 Utilization (in LBAs): 131072 (0GiB) 00:11:05.927 NGUID: 037946CB38564CD58ED09BF3E3502A34 00:11:05.927 UUID: 037946cb-3856-4cd5-8ed0-9bf3e3502a34 00:11:05.927 Thin Provisioning: Not Supported 00:11:05.927 Per-NS Atomic Units: Yes 00:11:05.927 Atomic Boundary Size (Normal): 0 00:11:05.927 Atomic Boundary Size (PFail): 0 00:11:05.927 Atomic Boundary Offset: 0 00:11:05.927 Maximum Single Source Range Length: 65535 00:11:05.927 Maximum Copy Length: 65535 00:11:05.927 Maximum Source Range Count: 1 00:11:05.927 NGUID/EUI64 Never Reused: No 00:11:05.927 Namespace Write Protected: No 00:11:05.927 Number of LBA Formats: 1 00:11:05.927 Current LBA Format: LBA Format #00 00:11:05.927 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:05.927 00:11:05.927 00:44:58 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:05.927 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.186 [2024-04-27 00:44:58.651836] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:11.474 [2024-04-27 00:45:03.671041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:11.474 Initializing NVMe Controllers 00:11:11.474 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:11.474 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:11.474 Initialization complete. Launching workers. 
00:11:11.474 ======================================================== 00:11:11.474 Latency(us) 00:11:11.474 Device Information : IOPS MiB/s Average min max 00:11:11.474 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39851.84 155.67 3211.71 983.92 10610.95 00:11:11.474 ======================================================== 00:11:11.474 Total : 39851.84 155.67 3211.71 983.92 10610.95 00:11:11.474 00:11:11.474 00:45:03 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:11.474 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.474 [2024-04-27 00:45:03.886018] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:16.741 [2024-04-27 00:45:08.919172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:16.741 Initializing NVMe Controllers 00:11:16.741 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:16.741 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:16.741 Initialization complete. Launching workers. 00:11:16.741 ======================================================== 00:11:16.741 Latency(us) 00:11:16.741 Device Information : IOPS MiB/s Average min max 00:11:16.741 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16044.00 62.67 7977.37 6062.96 14390.81 00:11:16.741 ======================================================== 00:11:16.741 Total : 16044.00 62.67 7977.37 6062.96 14390.81 00:11:16.741 00:11:16.741 00:45:08 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:16.741 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.741 [2024-04-27 00:45:09.111103] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:22.003 [2024-04-27 00:45:14.176375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:22.003 Initializing NVMe Controllers 00:11:22.003 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:22.003 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:22.003 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:22.003 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:22.003 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:22.003 Initialization complete. Launching workers. 
00:11:22.003 Starting thread on core 2 00:11:22.003 Starting thread on core 3 00:11:22.003 Starting thread on core 1 00:11:22.003 00:45:14 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:22.003 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.003 [2024-04-27 00:45:14.458535] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:25.290 [2024-04-27 00:45:17.515604] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:25.290 Initializing NVMe Controllers 00:11:25.290 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:25.290 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:25.290 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:25.290 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:25.290 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:25.290 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:25.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:25.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:25.290 Initialization complete. Launching workers. 00:11:25.290 Starting thread on core 1 with urgent priority queue 00:11:25.290 Starting thread on core 2 with urgent priority queue 00:11:25.290 Starting thread on core 3 with urgent priority queue 00:11:25.290 Starting thread on core 0 with urgent priority queue 00:11:25.290 SPDK bdev Controller (SPDK1 ) core 0: 9502.33 IO/s 10.52 secs/100000 ios 00:11:25.290 SPDK bdev Controller (SPDK1 ) core 1: 7577.33 IO/s 13.20 secs/100000 ios 00:11:25.290 SPDK bdev Controller (SPDK1 ) core 2: 9105.67 IO/s 10.98 secs/100000 ios 00:11:25.290 SPDK bdev Controller (SPDK1 ) core 3: 8812.00 IO/s 11.35 secs/100000 ios 00:11:25.290 ======================================================== 00:11:25.290 00:11:25.290 00:45:17 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:25.290 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.290 [2024-04-27 00:45:17.790559] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:25.290 [2024-04-27 00:45:17.824784] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:25.290 Initializing NVMe Controllers 00:11:25.290 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:25.290 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:25.290 Namespace ID: 1 size: 0GB 00:11:25.290 Initialization complete. 00:11:25.290 INFO: using host memory buffer for IO 00:11:25.290 Hello world! 
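The example tools exercised in this block (spdk_nvme_perf, reconnect, arbitration, hello_world and, just below, overhead) all reach the target the same way: through a VFIOUSER transport ID string naming the per-controller socket directory and the subsystem NQN. The following is a minimal sketch of how the perf and identify steps can be replayed by hand; the binary paths, transport string and flags are copied from the commands traced in this log, and it assumes the SPDK target started earlier in this job is still listening on /var/run/vfio-user/domain/vfio-user1/1.

# Sketch only - replays the @84/@85 perf steps and an identify pass against the same
# vfio-user controller; SPDK tree and socket path are the ones used earlier in this log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# 4096-byte reads, queue depth 128, 5 seconds, core mask 0x2 (matches the @84 run)
$SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# same parameters with a write workload (matches the @85 run)
$SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
# dump the controller identify data that the admin-command trace above corresponds to
$SPDK/build/bin/spdk_nvme_identify -r "$TRID" -g

The AER step later in the log follows the same pattern, hot-adding a second namespace over rpc.py (bdev_malloc_create, then nvmf_subsystem_add_ns) before waiting on the touch file.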
00:11:25.290 00:45:17 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:25.290 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.549 [2024-04-27 00:45:18.083152] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:26.485 Initializing NVMe Controllers 00:11:26.485 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:26.485 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:26.485 Initialization complete. Launching workers. 00:11:26.485 submit (in ns) avg, min, max = 7530.5, 3256.5, 3999483.5 00:11:26.485 complete (in ns) avg, min, max = 19271.6, 1750.4, 3998705.2 00:11:26.485 00:11:26.485 Submit histogram 00:11:26.485 ================ 00:11:26.485 Range in us Cumulative Count 00:11:26.485 3.256 - 3.270: 0.0299% ( 5) 00:11:26.485 3.270 - 3.283: 0.1135% ( 14) 00:11:26.485 3.283 - 3.297: 0.3047% ( 32) 00:11:26.485 3.297 - 3.311: 1.0932% ( 132) 00:11:26.485 3.311 - 3.325: 3.8829% ( 467) 00:11:26.485 3.325 - 3.339: 8.3513% ( 748) 00:11:26.485 3.339 - 3.353: 13.6380% ( 885) 00:11:26.485 3.353 - 3.367: 19.5102% ( 983) 00:11:26.485 3.367 - 3.381: 25.4122% ( 988) 00:11:26.485 3.381 - 3.395: 31.0693% ( 947) 00:11:26.485 3.395 - 3.409: 36.6189% ( 929) 00:11:26.485 3.409 - 3.423: 42.3178% ( 954) 00:11:26.485 3.423 - 3.437: 46.6487% ( 725) 00:11:26.485 3.437 - 3.450: 51.2963% ( 778) 00:11:26.485 3.450 - 3.464: 57.0131% ( 957) 00:11:26.485 3.464 - 3.478: 62.8674% ( 980) 00:11:26.485 3.478 - 3.492: 67.3417% ( 749) 00:11:26.486 3.492 - 3.506: 72.5090% ( 865) 00:11:26.486 3.506 - 3.520: 77.3118% ( 804) 00:11:26.486 3.520 - 3.534: 80.9319% ( 606) 00:11:26.486 3.534 - 3.548: 83.6022% ( 447) 00:11:26.486 3.548 - 3.562: 85.4480% ( 309) 00:11:26.486 3.562 - 3.590: 87.3477% ( 318) 00:11:26.486 3.590 - 3.617: 88.3931% ( 175) 00:11:26.486 3.617 - 3.645: 89.8268% ( 240) 00:11:26.486 3.645 - 3.673: 91.3501% ( 255) 00:11:26.486 3.673 - 3.701: 92.9630% ( 270) 00:11:26.486 3.701 - 3.729: 94.7551% ( 300) 00:11:26.486 3.729 - 3.757: 96.4994% ( 292) 00:11:26.486 3.757 - 3.784: 97.6941% ( 200) 00:11:26.486 3.784 - 3.812: 98.5544% ( 144) 00:11:26.486 3.812 - 3.840: 99.0502% ( 83) 00:11:26.486 3.840 - 3.868: 99.4325% ( 64) 00:11:26.486 3.868 - 3.896: 99.5400% ( 18) 00:11:26.486 3.896 - 3.923: 99.6117% ( 12) 00:11:26.486 3.923 - 3.951: 99.6177% ( 1) 00:11:26.486 3.951 - 3.979: 99.6296% ( 2) 00:11:26.486 4.007 - 4.035: 99.6416% ( 2) 00:11:26.486 4.981 - 5.009: 99.6476% ( 1) 00:11:26.486 5.009 - 5.037: 99.6535% ( 1) 00:11:26.486 5.315 - 5.343: 99.6595% ( 1) 00:11:26.486 5.370 - 5.398: 99.6655% ( 1) 00:11:26.486 5.537 - 5.565: 99.6714% ( 1) 00:11:26.486 5.593 - 5.621: 99.6774% ( 1) 00:11:26.486 5.649 - 5.677: 99.6834% ( 1) 00:11:26.486 5.677 - 5.704: 99.6953% ( 2) 00:11:26.486 5.704 - 5.732: 99.7013% ( 1) 00:11:26.486 5.760 - 5.788: 99.7073% ( 1) 00:11:26.486 5.843 - 5.871: 99.7192% ( 2) 00:11:26.486 5.955 - 5.983: 99.7312% ( 2) 00:11:26.486 6.122 - 6.150: 99.7372% ( 1) 00:11:26.486 6.177 - 6.205: 99.7491% ( 2) 00:11:26.486 6.289 - 6.317: 99.7551% ( 1) 00:11:26.486 6.344 - 6.372: 99.7611% ( 1) 00:11:26.486 6.400 - 6.428: 99.7670% ( 1) 00:11:26.486 6.428 - 6.456: 99.7730% ( 1) 00:11:26.486 6.456 - 6.483: 99.7790% ( 1) 00:11:26.486 6.483 - 6.511: 99.7849% ( 1) 00:11:26.486 6.511 - 6.539: 99.7909% ( 1) 00:11:26.486 6.539 - 6.567: 99.7969% ( 1) 00:11:26.486 6.650 - 
6.678: 99.8029% ( 1) 00:11:26.486 6.762 - 6.790: 99.8088% ( 1) 00:11:26.486 6.957 - 6.984: 99.8148% ( 1) 00:11:26.486 7.096 - 7.123: 99.8208% ( 1) 00:11:26.486 7.123 - 7.179: 99.8387% ( 3) 00:11:26.486 7.346 - 7.402: 99.8447% ( 1) 00:11:26.486 7.513 - 7.569: 99.8507% ( 1) 00:11:26.486 7.569 - 7.624: 99.8626% ( 2) 00:11:26.486 7.624 - 7.680: 99.8686% ( 1) 00:11:26.486 7.680 - 7.736: 99.8746% ( 1) 00:11:26.486 8.181 - 8.237: 99.8805% ( 1) 00:11:26.486 14.358 - 14.470: 99.8865% ( 1) 00:11:26.486 14.803 - 14.915: 99.8925% ( 1) 00:11:26.486 18.254 - 18.365: 99.8984% ( 1) 00:11:26.486 3989.148 - 4017.642: 100.0000% ( 17) 00:11:26.486 00:11:26.486 Complete histogram 00:11:26.486 ================== 00:11:26.486 Range in us Cumulative Count 00:11:26.486 1.746 - 1.753: 0.0119% ( 2) 00:11:26.486 1.781 - 1.795: 0.5078% ( 83) 00:11:26.486 1.795 - [2024-04-27 00:45:19.104072] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:26.486 1.809: 11.0215% ( 1760) 00:11:26.486 1.809 - 1.823: 16.7921% ( 966) 00:11:26.486 1.823 - 1.837: 19.1278% ( 391) 00:11:26.486 1.837 - 1.850: 48.8172% ( 4970) 00:11:26.486 1.850 - 1.864: 88.0048% ( 6560) 00:11:26.486 1.864 - 1.878: 94.1039% ( 1021) 00:11:26.486 1.878 - 1.892: 96.3202% ( 371) 00:11:26.486 1.892 - 1.906: 96.8578% ( 90) 00:11:26.486 1.906 - 1.920: 97.6762% ( 137) 00:11:26.486 1.920 - 1.934: 98.7276% ( 176) 00:11:26.486 1.934 - 1.948: 99.1935% ( 78) 00:11:26.486 1.948 - 1.962: 99.3369% ( 24) 00:11:26.486 1.962 - 1.976: 99.3728% ( 6) 00:11:26.486 1.976 - 1.990: 99.3907% ( 3) 00:11:26.486 1.990 - 2.003: 99.3967% ( 1) 00:11:26.486 2.157 - 2.170: 99.4026% ( 1) 00:11:26.486 2.254 - 2.268: 99.4086% ( 1) 00:11:26.486 2.407 - 2.421: 99.4146% ( 1) 00:11:26.486 2.532 - 2.546: 99.4205% ( 1) 00:11:26.486 3.590 - 3.617: 99.4265% ( 1) 00:11:26.486 4.007 - 4.035: 99.4325% ( 1) 00:11:26.486 4.202 - 4.230: 99.4385% ( 1) 00:11:26.486 4.341 - 4.369: 99.4444% ( 1) 00:11:26.486 4.424 - 4.452: 99.4564% ( 2) 00:11:26.486 4.508 - 4.536: 99.4624% ( 1) 00:11:26.486 4.563 - 4.591: 99.4743% ( 2) 00:11:26.486 4.981 - 5.009: 99.4803% ( 1) 00:11:26.486 5.203 - 5.231: 99.4863% ( 1) 00:11:26.486 5.287 - 5.315: 99.4922% ( 1) 00:11:26.486 5.510 - 5.537: 99.4982% ( 1) 00:11:26.486 5.537 - 5.565: 99.5042% ( 1) 00:11:26.486 5.732 - 5.760: 99.5102% ( 1) 00:11:26.486 5.899 - 5.927: 99.5161% ( 1) 00:11:26.486 5.927 - 5.955: 99.5221% ( 1) 00:11:26.486 6.400 - 6.428: 99.5281% ( 1) 00:11:26.486 6.483 - 6.511: 99.5341% ( 1) 00:11:26.486 6.511 - 6.539: 99.5400% ( 1) 00:11:26.486 6.623 - 6.650: 99.5460% ( 1) 00:11:26.486 11.186 - 11.242: 99.5520% ( 1) 00:11:26.486 13.023 - 13.078: 99.5579% ( 1) 00:11:26.486 15.249 - 15.360: 99.5639% ( 1) 00:11:26.486 3989.148 - 4017.642: 100.0000% ( 73) 00:11:26.486 00:11:26.486 00:45:19 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:26.486 00:45:19 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:26.486 00:45:19 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:26.486 00:45:19 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:26.486 00:45:19 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:26.745 [2024-04-27 00:45:19.297868] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to 
be removed in v24.05 00:11:26.745 [ 00:11:26.745 { 00:11:26.745 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:26.745 "subtype": "Discovery", 00:11:26.745 "listen_addresses": [], 00:11:26.745 "allow_any_host": true, 00:11:26.745 "hosts": [] 00:11:26.745 }, 00:11:26.745 { 00:11:26.745 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:26.745 "subtype": "NVMe", 00:11:26.745 "listen_addresses": [ 00:11:26.745 { 00:11:26.745 "transport": "VFIOUSER", 00:11:26.745 "trtype": "VFIOUSER", 00:11:26.745 "adrfam": "IPv4", 00:11:26.745 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:26.745 "trsvcid": "0" 00:11:26.745 } 00:11:26.745 ], 00:11:26.745 "allow_any_host": true, 00:11:26.745 "hosts": [], 00:11:26.745 "serial_number": "SPDK1", 00:11:26.745 "model_number": "SPDK bdev Controller", 00:11:26.745 "max_namespaces": 32, 00:11:26.745 "min_cntlid": 1, 00:11:26.745 "max_cntlid": 65519, 00:11:26.745 "namespaces": [ 00:11:26.745 { 00:11:26.745 "nsid": 1, 00:11:26.745 "bdev_name": "Malloc1", 00:11:26.745 "name": "Malloc1", 00:11:26.745 "nguid": "037946CB38564CD58ED09BF3E3502A34", 00:11:26.745 "uuid": "037946cb-3856-4cd5-8ed0-9bf3e3502a34" 00:11:26.745 } 00:11:26.745 ] 00:11:26.745 }, 00:11:26.745 { 00:11:26.745 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:26.745 "subtype": "NVMe", 00:11:26.745 "listen_addresses": [ 00:11:26.745 { 00:11:26.745 "transport": "VFIOUSER", 00:11:26.745 "trtype": "VFIOUSER", 00:11:26.745 "adrfam": "IPv4", 00:11:26.745 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:26.745 "trsvcid": "0" 00:11:26.745 } 00:11:26.745 ], 00:11:26.745 "allow_any_host": true, 00:11:26.745 "hosts": [], 00:11:26.745 "serial_number": "SPDK2", 00:11:26.745 "model_number": "SPDK bdev Controller", 00:11:26.745 "max_namespaces": 32, 00:11:26.745 "min_cntlid": 1, 00:11:26.745 "max_cntlid": 65519, 00:11:26.745 "namespaces": [ 00:11:26.745 { 00:11:26.745 "nsid": 1, 00:11:26.745 "bdev_name": "Malloc2", 00:11:26.745 "name": "Malloc2", 00:11:26.745 "nguid": "D5FB9638FDF045789FF436E5172C1B47", 00:11:26.745 "uuid": "d5fb9638-fdf0-4578-9ff4-36e5172c1b47" 00:11:26.745 } 00:11:26.745 ] 00:11:26.745 } 00:11:26.745 ] 00:11:26.745 00:45:19 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:26.745 00:45:19 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:26.745 00:45:19 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1616115 00:11:26.745 00:45:19 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:26.745 00:45:19 -- common/autotest_common.sh@1251 -- # local i=0 00:11:26.745 00:45:19 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:26.745 00:45:19 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:26.745 00:45:19 -- common/autotest_common.sh@1262 -- # return 0 00:11:26.745 00:45:19 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:26.745 00:45:19 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:26.745 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.004 [2024-04-27 00:45:19.463520] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:27.004 Malloc3 00:11:27.004 00:45:19 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:27.004 [2024-04-27 00:45:19.698296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:27.263 00:45:19 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:27.263 Asynchronous Event Request test 00:11:27.263 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:27.263 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:27.263 Registering asynchronous event callbacks... 00:11:27.263 Starting namespace attribute notice tests for all controllers... 00:11:27.263 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:27.263 aer_cb - Changed Namespace 00:11:27.263 Cleaning up... 00:11:27.263 [ 00:11:27.263 { 00:11:27.263 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:27.263 "subtype": "Discovery", 00:11:27.263 "listen_addresses": [], 00:11:27.263 "allow_any_host": true, 00:11:27.263 "hosts": [] 00:11:27.263 }, 00:11:27.263 { 00:11:27.263 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:27.263 "subtype": "NVMe", 00:11:27.263 "listen_addresses": [ 00:11:27.263 { 00:11:27.263 "transport": "VFIOUSER", 00:11:27.263 "trtype": "VFIOUSER", 00:11:27.263 "adrfam": "IPv4", 00:11:27.263 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:27.263 "trsvcid": "0" 00:11:27.263 } 00:11:27.263 ], 00:11:27.263 "allow_any_host": true, 00:11:27.263 "hosts": [], 00:11:27.263 "serial_number": "SPDK1", 00:11:27.263 "model_number": "SPDK bdev Controller", 00:11:27.263 "max_namespaces": 32, 00:11:27.263 "min_cntlid": 1, 00:11:27.263 "max_cntlid": 65519, 00:11:27.263 "namespaces": [ 00:11:27.263 { 00:11:27.263 "nsid": 1, 00:11:27.263 "bdev_name": "Malloc1", 00:11:27.263 "name": "Malloc1", 00:11:27.263 "nguid": "037946CB38564CD58ED09BF3E3502A34", 00:11:27.263 "uuid": "037946cb-3856-4cd5-8ed0-9bf3e3502a34" 00:11:27.263 }, 00:11:27.263 { 00:11:27.263 "nsid": 2, 00:11:27.263 "bdev_name": "Malloc3", 00:11:27.263 "name": "Malloc3", 00:11:27.263 "nguid": "07375C693080441C96BE79D194DA6FC7", 00:11:27.263 "uuid": "07375c69-3080-441c-96be-79d194da6fc7" 00:11:27.263 } 00:11:27.263 ] 00:11:27.263 }, 00:11:27.263 { 00:11:27.263 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:27.263 "subtype": "NVMe", 00:11:27.263 "listen_addresses": [ 00:11:27.263 { 00:11:27.263 "transport": "VFIOUSER", 00:11:27.263 "trtype": "VFIOUSER", 00:11:27.263 "adrfam": "IPv4", 00:11:27.263 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:27.263 "trsvcid": "0" 00:11:27.263 } 00:11:27.263 ], 00:11:27.263 "allow_any_host": true, 00:11:27.263 "hosts": [], 00:11:27.263 "serial_number": "SPDK2", 00:11:27.263 "model_number": "SPDK bdev Controller", 00:11:27.263 "max_namespaces": 32, 00:11:27.263 "min_cntlid": 1, 
00:11:27.263 "max_cntlid": 65519, 00:11:27.263 "namespaces": [ 00:11:27.263 { 00:11:27.263 "nsid": 1, 00:11:27.263 "bdev_name": "Malloc2", 00:11:27.263 "name": "Malloc2", 00:11:27.263 "nguid": "D5FB9638FDF045789FF436E5172C1B47", 00:11:27.263 "uuid": "d5fb9638-fdf0-4578-9ff4-36e5172c1b47" 00:11:27.263 } 00:11:27.263 ] 00:11:27.263 } 00:11:27.263 ] 00:11:27.263 00:45:19 -- target/nvmf_vfio_user.sh@44 -- # wait 1616115 00:11:27.263 00:45:19 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:27.263 00:45:19 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:27.263 00:45:19 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:27.263 00:45:19 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:27.263 [2024-04-27 00:45:19.932230] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:11:27.263 [2024-04-27 00:45:19.932260] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1616343 ] 00:11:27.263 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.524 [2024-04-27 00:45:19.961441] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:27.524 [2024-04-27 00:45:19.972977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:27.524 [2024-04-27 00:45:19.972997] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f81ca3e8000 00:11:27.524 [2024-04-27 00:45:19.973974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:27.525 [2024-04-27 00:45:19.974976] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:27.525 [2024-04-27 00:45:19.975985] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:27.525 [2024-04-27 00:45:19.976993] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:27.525 [2024-04-27 00:45:19.978008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:27.525 [2024-04-27 00:45:19.979012] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:27.525 [2024-04-27 00:45:19.980018] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:27.525 [2024-04-27 00:45:19.981025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:27.525 [2024-04-27 00:45:19.982031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:27.525 [2024-04-27 00:45:19.982043] vfio_user_pci.c: 
233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f81ca3dd000 00:11:27.525 [2024-04-27 00:45:19.982982] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:27.525 [2024-04-27 00:45:19.994544] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:27.525 [2024-04-27 00:45:19.994567] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:27.525 [2024-04-27 00:45:19.996613] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:27.525 [2024-04-27 00:45:19.996654] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:27.525 [2024-04-27 00:45:19.996727] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:11:27.525 [2024-04-27 00:45:19.996744] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:27.525 [2024-04-27 00:45:19.996749] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:27.525 [2024-04-27 00:45:19.997616] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:27.525 [2024-04-27 00:45:19.997625] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:27.525 [2024-04-27 00:45:19.997632] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:27.525 [2024-04-27 00:45:19.998628] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:27.525 [2024-04-27 00:45:19.998636] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:27.525 [2024-04-27 00:45:19.998643] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:27.525 [2024-04-27 00:45:19.999631] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:27.525 [2024-04-27 00:45:19.999640] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:27.525 [2024-04-27 00:45:20.000638] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:27.525 [2024-04-27 00:45:20.000646] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:27.525 [2024-04-27 00:45:20.000650] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:27.525 [2024-04-27 00:45:20.000656] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:27.525 [2024-04-27 00:45:20.000762] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:27.525 [2024-04-27 00:45:20.000766] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:27.525 [2024-04-27 00:45:20.000771] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:27.525 [2024-04-27 00:45:20.005076] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:27.525 [2024-04-27 00:45:20.005667] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:27.525 [2024-04-27 00:45:20.006678] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:27.525 [2024-04-27 00:45:20.007678] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:27.525 [2024-04-27 00:45:20.007717] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:27.525 [2024-04-27 00:45:20.008688] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:27.525 [2024-04-27 00:45:20.008698] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:27.525 [2024-04-27 00:45:20.008702] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.008720] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:27.525 [2024-04-27 00:45:20.008727] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.008739] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:27.525 [2024-04-27 00:45:20.008744] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:27.525 [2024-04-27 00:45:20.008755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:27.525 [2024-04-27 00:45:20.016078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:27.525 [2024-04-27 00:45:20.016090] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:27.525 [2024-04-27 00:45:20.016095] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:27.525 [2024-04-27 00:45:20.016099] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:27.525 [2024-04-27 00:45:20.016104] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:27.525 [2024-04-27 00:45:20.016109] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:27.525 [2024-04-27 00:45:20.016113] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:27.525 [2024-04-27 00:45:20.016118] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.016125] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.016135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:27.525 [2024-04-27 00:45:20.024078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:27.525 [2024-04-27 00:45:20.024093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.525 [2024-04-27 00:45:20.024102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.525 [2024-04-27 00:45:20.024110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.525 [2024-04-27 00:45:20.024118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.525 [2024-04-27 00:45:20.024123] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.024132] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.024141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:27.525 [2024-04-27 00:45:20.032075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:27.525 [2024-04-27 00:45:20.032086] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:27.525 [2024-04-27 00:45:20.032091] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.032099] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.032104] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.032113] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:27.525 [2024-04-27 00:45:20.040075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:27.525 [2024-04-27 00:45:20.040118] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.040126] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.040133] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:27.525 [2024-04-27 00:45:20.040137] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:27.525 [2024-04-27 00:45:20.040143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:27.525 [2024-04-27 00:45:20.048076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:27.525 [2024-04-27 00:45:20.048089] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:27.525 [2024-04-27 00:45:20.048098] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.048105] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.048112] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:27.525 [2024-04-27 00:45:20.048116] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:27.525 [2024-04-27 00:45:20.048121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:27.525 [2024-04-27 00:45:20.056079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:27.525 [2024-04-27 00:45:20.056096] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.056104] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.056111] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:27.525 [2024-04-27 00:45:20.056116] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:27.525 [2024-04-27 00:45:20.056122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:27.525 [2024-04-27 00:45:20.064076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:27.525 [2024-04-27 00:45:20.064090] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.064097] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.064104] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.064110] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.064115] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.064119] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:27.525 [2024-04-27 00:45:20.064123] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:27.525 [2024-04-27 00:45:20.064128] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:27.525 [2024-04-27 00:45:20.064145] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:27.525 [2024-04-27 00:45:20.072077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:27.525 [2024-04-27 00:45:20.072090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:27.525 [2024-04-27 00:45:20.080076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:27.525 [2024-04-27 00:45:20.080088] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:27.525 [2024-04-27 00:45:20.088075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:27.525 [2024-04-27 00:45:20.088088] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:27.525 [2024-04-27 00:45:20.096075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:27.525 [2024-04-27 00:45:20.096089] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:27.525 [2024-04-27 00:45:20.096093] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:27.525 [2024-04-27 00:45:20.096097] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:27.525 [2024-04-27 00:45:20.096100] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:27.525 [2024-04-27 00:45:20.096105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:27.525 
[2024-04-27 00:45:20.096112] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:27.525 [2024-04-27 00:45:20.096116] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:27.525 [2024-04-27 00:45:20.096121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:27.525 [2024-04-27 00:45:20.096128] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:27.525 [2024-04-27 00:45:20.096132] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:27.525 [2024-04-27 00:45:20.096137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:27.525 [2024-04-27 00:45:20.096146] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:27.525 [2024-04-27 00:45:20.096150] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:27.525 [2024-04-27 00:45:20.096156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:27.526 [2024-04-27 00:45:20.104074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:27.526 [2024-04-27 00:45:20.104090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:27.526 [2024-04-27 00:45:20.104099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:27.526 [2024-04-27 00:45:20.104105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:27.526 ===================================================== 00:11:27.526 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:27.526 ===================================================== 00:11:27.526 Controller Capabilities/Features 00:11:27.526 ================================ 00:11:27.526 Vendor ID: 4e58 00:11:27.526 Subsystem Vendor ID: 4e58 00:11:27.526 Serial Number: SPDK2 00:11:27.526 Model Number: SPDK bdev Controller 00:11:27.526 Firmware Version: 24.05 00:11:27.526 Recommended Arb Burst: 6 00:11:27.526 IEEE OUI Identifier: 8d 6b 50 00:11:27.526 Multi-path I/O 00:11:27.526 May have multiple subsystem ports: Yes 00:11:27.526 May have multiple controllers: Yes 00:11:27.526 Associated with SR-IOV VF: No 00:11:27.526 Max Data Transfer Size: 131072 00:11:27.526 Max Number of Namespaces: 32 00:11:27.526 Max Number of I/O Queues: 127 00:11:27.526 NVMe Specification Version (VS): 1.3 00:11:27.526 NVMe Specification Version (Identify): 1.3 00:11:27.526 Maximum Queue Entries: 256 00:11:27.526 Contiguous Queues Required: Yes 00:11:27.526 Arbitration Mechanisms Supported 00:11:27.526 Weighted Round Robin: Not Supported 00:11:27.526 Vendor Specific: Not Supported 00:11:27.526 Reset Timeout: 15000 ms 00:11:27.526 Doorbell Stride: 4 bytes 00:11:27.526 NVM Subsystem Reset: Not Supported 00:11:27.526 Command Sets Supported 00:11:27.526 NVM Command Set: Supported 00:11:27.526 Boot Partition: Not Supported 00:11:27.526 
Memory Page Size Minimum: 4096 bytes 00:11:27.526 Memory Page Size Maximum: 4096 bytes 00:11:27.526 Persistent Memory Region: Not Supported 00:11:27.526 Optional Asynchronous Events Supported 00:11:27.526 Namespace Attribute Notices: Supported 00:11:27.526 Firmware Activation Notices: Not Supported 00:11:27.526 ANA Change Notices: Not Supported 00:11:27.526 PLE Aggregate Log Change Notices: Not Supported 00:11:27.526 LBA Status Info Alert Notices: Not Supported 00:11:27.526 EGE Aggregate Log Change Notices: Not Supported 00:11:27.526 Normal NVM Subsystem Shutdown event: Not Supported 00:11:27.526 Zone Descriptor Change Notices: Not Supported 00:11:27.526 Discovery Log Change Notices: Not Supported 00:11:27.526 Controller Attributes 00:11:27.526 128-bit Host Identifier: Supported 00:11:27.526 Non-Operational Permissive Mode: Not Supported 00:11:27.526 NVM Sets: Not Supported 00:11:27.526 Read Recovery Levels: Not Supported 00:11:27.526 Endurance Groups: Not Supported 00:11:27.526 Predictable Latency Mode: Not Supported 00:11:27.526 Traffic Based Keep ALive: Not Supported 00:11:27.526 Namespace Granularity: Not Supported 00:11:27.526 SQ Associations: Not Supported 00:11:27.526 UUID List: Not Supported 00:11:27.526 Multi-Domain Subsystem: Not Supported 00:11:27.526 Fixed Capacity Management: Not Supported 00:11:27.526 Variable Capacity Management: Not Supported 00:11:27.526 Delete Endurance Group: Not Supported 00:11:27.526 Delete NVM Set: Not Supported 00:11:27.526 Extended LBA Formats Supported: Not Supported 00:11:27.526 Flexible Data Placement Supported: Not Supported 00:11:27.526 00:11:27.526 Controller Memory Buffer Support 00:11:27.526 ================================ 00:11:27.526 Supported: No 00:11:27.526 00:11:27.526 Persistent Memory Region Support 00:11:27.526 ================================ 00:11:27.526 Supported: No 00:11:27.526 00:11:27.526 Admin Command Set Attributes 00:11:27.526 ============================ 00:11:27.526 Security Send/Receive: Not Supported 00:11:27.526 Format NVM: Not Supported 00:11:27.526 Firmware Activate/Download: Not Supported 00:11:27.526 Namespace Management: Not Supported 00:11:27.526 Device Self-Test: Not Supported 00:11:27.526 Directives: Not Supported 00:11:27.526 NVMe-MI: Not Supported 00:11:27.526 Virtualization Management: Not Supported 00:11:27.526 Doorbell Buffer Config: Not Supported 00:11:27.526 Get LBA Status Capability: Not Supported 00:11:27.526 Command & Feature Lockdown Capability: Not Supported 00:11:27.526 Abort Command Limit: 4 00:11:27.526 Async Event Request Limit: 4 00:11:27.526 Number of Firmware Slots: N/A 00:11:27.526 Firmware Slot 1 Read-Only: N/A 00:11:27.526 Firmware Activation Without Reset: N/A 00:11:27.526 Multiple Update Detection Support: N/A 00:11:27.526 Firmware Update Granularity: No Information Provided 00:11:27.526 Per-Namespace SMART Log: No 00:11:27.526 Asymmetric Namespace Access Log Page: Not Supported 00:11:27.526 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:27.526 Command Effects Log Page: Supported 00:11:27.526 Get Log Page Extended Data: Supported 00:11:27.526 Telemetry Log Pages: Not Supported 00:11:27.526 Persistent Event Log Pages: Not Supported 00:11:27.526 Supported Log Pages Log Page: May Support 00:11:27.526 Commands Supported & Effects Log Page: Not Supported 00:11:27.526 Feature Identifiers & Effects Log Page:May Support 00:11:27.526 NVMe-MI Commands & Effects Log Page: May Support 00:11:27.526 Data Area 4 for Telemetry Log: Not Supported 00:11:27.526 Error Log Page Entries Supported: 128 
00:11:27.526 Keep Alive: Supported 00:11:27.526 Keep Alive Granularity: 10000 ms 00:11:27.526 00:11:27.526 NVM Command Set Attributes 00:11:27.526 ========================== 00:11:27.526 Submission Queue Entry Size 00:11:27.526 Max: 64 00:11:27.526 Min: 64 00:11:27.526 Completion Queue Entry Size 00:11:27.526 Max: 16 00:11:27.526 Min: 16 00:11:27.526 Number of Namespaces: 32 00:11:27.526 Compare Command: Supported 00:11:27.526 Write Uncorrectable Command: Not Supported 00:11:27.526 Dataset Management Command: Supported 00:11:27.526 Write Zeroes Command: Supported 00:11:27.526 Set Features Save Field: Not Supported 00:11:27.526 Reservations: Not Supported 00:11:27.526 Timestamp: Not Supported 00:11:27.526 Copy: Supported 00:11:27.526 Volatile Write Cache: Present 00:11:27.526 Atomic Write Unit (Normal): 1 00:11:27.526 Atomic Write Unit (PFail): 1 00:11:27.526 Atomic Compare & Write Unit: 1 00:11:27.526 Fused Compare & Write: Supported 00:11:27.526 Scatter-Gather List 00:11:27.526 SGL Command Set: Supported (Dword aligned) 00:11:27.526 SGL Keyed: Not Supported 00:11:27.526 SGL Bit Bucket Descriptor: Not Supported 00:11:27.526 SGL Metadata Pointer: Not Supported 00:11:27.526 Oversized SGL: Not Supported 00:11:27.526 SGL Metadata Address: Not Supported 00:11:27.526 SGL Offset: Not Supported 00:11:27.526 Transport SGL Data Block: Not Supported 00:11:27.526 Replay Protected Memory Block: Not Supported 00:11:27.526 00:11:27.526 Firmware Slot Information 00:11:27.526 ========================= 00:11:27.526 Active slot: 1 00:11:27.526 Slot 1 Firmware Revision: 24.05 00:11:27.526 00:11:27.526 00:11:27.526 Commands Supported and Effects 00:11:27.526 ============================== 00:11:27.526 Admin Commands 00:11:27.526 -------------- 00:11:27.526 Get Log Page (02h): Supported 00:11:27.526 Identify (06h): Supported 00:11:27.526 Abort (08h): Supported 00:11:27.526 Set Features (09h): Supported 00:11:27.526 Get Features (0Ah): Supported 00:11:27.526 Asynchronous Event Request (0Ch): Supported 00:11:27.526 Keep Alive (18h): Supported 00:11:27.526 I/O Commands 00:11:27.526 ------------ 00:11:27.526 Flush (00h): Supported LBA-Change 00:11:27.526 Write (01h): Supported LBA-Change 00:11:27.526 Read (02h): Supported 00:11:27.526 Compare (05h): Supported 00:11:27.526 Write Zeroes (08h): Supported LBA-Change 00:11:27.526 Dataset Management (09h): Supported LBA-Change 00:11:27.526 Copy (19h): Supported LBA-Change 00:11:27.526 Unknown (79h): Supported LBA-Change 00:11:27.526 Unknown (7Ah): Supported 00:11:27.526 00:11:27.526 Error Log 00:11:27.526 ========= 00:11:27.526 00:11:27.526 Arbitration 00:11:27.526 =========== 00:11:27.526 Arbitration Burst: 1 00:11:27.526 00:11:27.526 Power Management 00:11:27.526 ================ 00:11:27.526 Number of Power States: 1 00:11:27.526 Current Power State: Power State #0 00:11:27.526 Power State #0: 00:11:27.526 Max Power: 0.00 W 00:11:27.526 Non-Operational State: Operational 00:11:27.526 Entry Latency: Not Reported 00:11:27.526 Exit Latency: Not Reported 00:11:27.526 Relative Read Throughput: 0 00:11:27.526 Relative Read Latency: 0 00:11:27.526 Relative Write Throughput: 0 00:11:27.526 Relative Write Latency: 0 00:11:27.526 Idle Power: Not Reported 00:11:27.526 Active Power: Not Reported 00:11:27.526 Non-Operational Permissive Mode: Not Supported 00:11:27.526 00:11:27.526 Health Information 00:11:27.526 ================== 00:11:27.526 Critical Warnings: 00:11:27.526 Available Spare Space: OK 00:11:27.526 Temperature: OK 00:11:27.526 Device Reliability: OK 00:11:27.526 
Read Only: No 00:11:27.526 Volatile Memory Backup: OK 00:11:27.526 [2024-04-27 00:45:20.104202] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:27.526 [2024-04-27 00:45:20.112075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:27.526 [2024-04-27 00:45:20.112104] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:27.526 [2024-04-27 00:45:20.112113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.526 [2024-04-27 00:45:20.112119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.526 [2024-04-27 00:45:20.112124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.526 [2024-04-27 00:45:20.112130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.526 [2024-04-27 00:45:20.112175] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:27.526 [2024-04-27 00:45:20.112187] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:27.526 [2024-04-27 00:45:20.113185] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:27.526 [2024-04-27 00:45:20.113232] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:27.526 [2024-04-27 00:45:20.113239] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:27.526 [2024-04-27 00:45:20.114184] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:27.526 [2024-04-27 00:45:20.114196] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:27.526 [2024-04-27 00:45:20.114329] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:27.526 [2024-04-27 00:45:20.115395] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:27.526 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:27.526 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:27.526 Available Spare: 0% 00:11:27.526 Available Spare Threshold: 0% 00:11:27.526 Life Percentage Used: 0% 00:11:27.526 Data Units Read: 0 00:11:27.526 Data Units Written: 0 00:11:27.526 Host Read Commands: 0 00:11:27.526 Host Write Commands: 0 00:11:27.526 Controller Busy Time: 0 minutes 00:11:27.526 Power Cycles: 0 00:11:27.526 Power On Hours: 0 hours 00:11:27.526 Unsafe Shutdowns: 0 00:11:27.526 Unrecoverable Media Errors: 0 00:11:27.526 Lifetime Error Log Entries: 0 00:11:27.527 Warning Temperature Time: 0 minutes 00:11:27.527 Critical Temperature Time: 0 minutes 00:11:27.527 00:11:27.527 Number of Queues 00:11:27.527 ================ 00:11:27.527 Number of I/O Submission Queues: 127 
00:11:27.527 Number of I/O Completion Queues: 127 00:11:27.527 00:11:27.527 Active Namespaces 00:11:27.527 ================= 00:11:27.527 Namespace ID:1 00:11:27.527 Error Recovery Timeout: Unlimited 00:11:27.527 Command Set Identifier: NVM (00h) 00:11:27.527 Deallocate: Supported 00:11:27.527 Deallocated/Unwritten Error: Not Supported 00:11:27.527 Deallocated Read Value: Unknown 00:11:27.527 Deallocate in Write Zeroes: Not Supported 00:11:27.527 Deallocated Guard Field: 0xFFFF 00:11:27.527 Flush: Supported 00:11:27.527 Reservation: Supported 00:11:27.527 Namespace Sharing Capabilities: Multiple Controllers 00:11:27.527 Size (in LBAs): 131072 (0GiB) 00:11:27.527 Capacity (in LBAs): 131072 (0GiB) 00:11:27.527 Utilization (in LBAs): 131072 (0GiB) 00:11:27.527 NGUID: D5FB9638FDF045789FF436E5172C1B47 00:11:27.527 UUID: d5fb9638-fdf0-4578-9ff4-36e5172c1b47 00:11:27.527 Thin Provisioning: Not Supported 00:11:27.527 Per-NS Atomic Units: Yes 00:11:27.527 Atomic Boundary Size (Normal): 0 00:11:27.527 Atomic Boundary Size (PFail): 0 00:11:27.527 Atomic Boundary Offset: 0 00:11:27.527 Maximum Single Source Range Length: 65535 00:11:27.527 Maximum Copy Length: 65535 00:11:27.527 Maximum Source Range Count: 1 00:11:27.527 NGUID/EUI64 Never Reused: No 00:11:27.527 Namespace Write Protected: No 00:11:27.527 Number of LBA Formats: 1 00:11:27.527 Current LBA Format: LBA Format #00 00:11:27.527 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:27.527 00:11:27.527 00:45:20 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:27.527 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.786 [2024-04-27 00:45:20.322384] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:33.082 [2024-04-27 00:45:25.429304] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:33.082 Initializing NVMe Controllers 00:11:33.082 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:33.082 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:33.082 Initialization complete. Launching workers. 
00:11:33.082 ======================================================== 00:11:33.082 Latency(us) 00:11:33.083 Device Information : IOPS MiB/s Average min max 00:11:33.083 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39823.74 155.56 3213.75 988.87 10617.91 00:11:33.083 ======================================================== 00:11:33.083 Total : 39823.74 155.56 3213.75 988.87 10617.91 00:11:33.083 00:11:33.083 00:45:25 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:33.083 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.083 [2024-04-27 00:45:25.643970] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:38.353 [2024-04-27 00:45:30.664680] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:38.353 Initializing NVMe Controllers 00:11:38.353 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:38.353 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:38.353 Initialization complete. Launching workers. 00:11:38.353 ======================================================== 00:11:38.353 Latency(us) 00:11:38.353 Device Information : IOPS MiB/s Average min max 00:11:38.353 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39899.12 155.86 3207.89 1001.88 8595.90 00:11:38.353 ======================================================== 00:11:38.353 Total : 39899.12 155.86 3207.89 1001.88 8595.90 00:11:38.353 00:11:38.353 00:45:30 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:38.353 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.353 [2024-04-27 00:45:30.850099] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:43.625 [2024-04-27 00:45:36.000167] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:43.625 Initializing NVMe Controllers 00:11:43.625 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:43.625 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:43.625 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:43.625 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:11:43.625 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:43.625 Initialization complete. Launching workers. 
00:11:43.625 Starting thread on core 2 00:11:43.625 Starting thread on core 3 00:11:43.625 Starting thread on core 1 00:11:43.625 00:45:36 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:43.625 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.625 [2024-04-27 00:45:36.287494] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:46.913 [2024-04-27 00:45:39.354081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:46.913 Initializing NVMe Controllers 00:11:46.913 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:46.913 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:46.913 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:46.913 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:46.913 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:46.913 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:46.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:46.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:46.913 Initialization complete. Launching workers. 00:11:46.913 Starting thread on core 1 with urgent priority queue 00:11:46.913 Starting thread on core 2 with urgent priority queue 00:11:46.913 Starting thread on core 3 with urgent priority queue 00:11:46.913 Starting thread on core 0 with urgent priority queue 00:11:46.913 SPDK bdev Controller (SPDK2 ) core 0: 7006.67 IO/s 14.27 secs/100000 ios 00:11:46.913 SPDK bdev Controller (SPDK2 ) core 1: 7779.00 IO/s 12.86 secs/100000 ios 00:11:46.913 SPDK bdev Controller (SPDK2 ) core 2: 7906.33 IO/s 12.65 secs/100000 ios 00:11:46.913 SPDK bdev Controller (SPDK2 ) core 3: 10994.00 IO/s 9.10 secs/100000 ios 00:11:46.913 ======================================================== 00:11:46.913 00:11:46.913 00:45:39 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:46.913 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.172 [2024-04-27 00:45:39.620098] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:47.172 [2024-04-27 00:45:39.632180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:47.172 Initializing NVMe Controllers 00:11:47.172 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:47.172 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:47.172 Namespace ID: 1 size: 0GB 00:11:47.172 Initialization complete. 00:11:47.172 INFO: using host memory buffer for IO 00:11:47.172 Hello world! 
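Editor's note: the perf, reconnect, arbitration and hello_world steps above all reach the same controller through an SPDK transport ID string (trtype:VFIOUSER plus the vfio-user socket directory and subsystem NQN) rather than a PCIe address. As a rough standalone sketch, with the binary path, queue depth, I/O size and core mask simply copied from the invocations logged above (they are illustrative, not required values), the read and write measurements could be rerun like this:

  #!/usr/bin/env bash
  # Sketch only: assumes the nvmf target from this run is still serving the
  # vfio-user socket below; all numeric parameters mirror the log above.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

  for workload in read write; do
      # 128 outstanding 4 KiB I/Os for 5 seconds on core 1 (mask 0x2)
      "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g \
          -q 128 -o 4096 -w "$workload" -t 5 -c 0x2
  done

Each run then prints the per-device table seen above: IOPS, MiB/s, and average/min/max latency in microseconds.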
00:11:47.172 00:45:39 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:47.172 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.432 [2024-04-27 00:45:39.897972] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:48.441 Initializing NVMe Controllers 00:11:48.441 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:48.441 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:48.441 Initialization complete. Launching workers. 00:11:48.441 submit (in ns) avg, min, max = 6300.9, 3240.9, 4001010.4 00:11:48.441 complete (in ns) avg, min, max = 19968.2, 1785.2, 4993851.3 00:11:48.441 00:11:48.441 Submit histogram 00:11:48.441 ================ 00:11:48.441 Range in us Cumulative Count 00:11:48.442 3.228 - 3.242: 0.0060% ( 1) 00:11:48.442 3.242 - 3.256: 0.1268% ( 20) 00:11:48.442 3.256 - 3.270: 0.6400% ( 85) 00:11:48.442 3.270 - 3.283: 2.8799% ( 371) 00:11:48.442 3.283 - 3.297: 6.5749% ( 612) 00:11:48.442 3.297 - 3.311: 10.9400% ( 723) 00:11:48.442 3.311 - 3.325: 15.8848% ( 819) 00:11:48.442 3.325 - 3.339: 22.1216% ( 1033) 00:11:48.442 3.339 - 3.353: 27.3139% ( 860) 00:11:48.442 3.353 - 3.367: 33.0496% ( 950) 00:11:48.442 3.367 - 3.381: 38.5920% ( 918) 00:11:48.442 3.381 - 3.395: 42.9270% ( 718) 00:11:48.442 3.395 - 3.409: 47.0084% ( 676) 00:11:48.442 3.409 - 3.423: 51.6150% ( 763) 00:11:48.442 3.423 - 3.437: 57.6586% ( 1001) 00:11:48.442 3.437 - 3.450: 62.7362% ( 841) 00:11:48.442 3.450 - 3.464: 67.5844% ( 803) 00:11:48.442 3.464 - 3.478: 73.0785% ( 910) 00:11:48.442 3.478 - 3.492: 76.9969% ( 649) 00:11:48.442 3.492 - 3.506: 80.2330% ( 536) 00:11:48.442 3.506 - 3.520: 82.6239% ( 396) 00:11:48.442 3.520 - 3.534: 84.4171% ( 297) 00:11:48.442 3.534 - 3.548: 85.4193% ( 166) 00:11:48.442 3.548 - 3.562: 86.0472% ( 104) 00:11:48.442 3.562 - 3.590: 87.3936% ( 223) 00:11:48.442 3.590 - 3.617: 89.1083% ( 284) 00:11:48.442 3.617 - 3.645: 90.8169% ( 283) 00:11:48.442 3.645 - 3.673: 92.4229% ( 266) 00:11:48.442 3.673 - 3.701: 94.3247% ( 315) 00:11:48.442 3.701 - 3.729: 95.9428% ( 268) 00:11:48.442 3.729 - 3.757: 97.2891% ( 223) 00:11:48.442 3.757 - 3.784: 98.2733% ( 163) 00:11:48.442 3.784 - 3.812: 98.7321% ( 76) 00:11:48.442 3.812 - 3.840: 99.1427% ( 68) 00:11:48.442 3.840 - 3.868: 99.3902% ( 41) 00:11:48.442 3.868 - 3.896: 99.4566% ( 11) 00:11:48.442 3.896 - 3.923: 99.5351% ( 13) 00:11:48.442 3.923 - 3.951: 99.5532% ( 3) 00:11:48.442 3.951 - 3.979: 99.5774% ( 4) 00:11:48.442 3.979 - 4.007: 99.6015% ( 4) 00:11:48.442 4.007 - 4.035: 99.6136% ( 2) 00:11:48.442 4.035 - 4.063: 99.6377% ( 4) 00:11:48.442 4.063 - 4.090: 99.6438% ( 1) 00:11:48.442 4.090 - 4.118: 99.6619% ( 3) 00:11:48.442 4.146 - 4.174: 99.6800% ( 3) 00:11:48.442 4.174 - 4.202: 99.6860% ( 1) 00:11:48.442 4.257 - 4.285: 99.6921% ( 1) 00:11:48.442 4.397 - 4.424: 99.6981% ( 1) 00:11:48.442 4.424 - 4.452: 99.7042% ( 1) 00:11:48.442 4.703 - 4.730: 99.7102% ( 1) 00:11:48.442 4.842 - 4.870: 99.7162% ( 1) 00:11:48.442 5.009 - 5.037: 99.7223% ( 1) 00:11:48.442 5.037 - 5.064: 99.7283% ( 1) 00:11:48.442 5.259 - 5.287: 99.7343% ( 1) 00:11:48.442 5.287 - 5.315: 99.7404% ( 1) 00:11:48.442 5.426 - 5.454: 99.7464% ( 1) 00:11:48.442 5.454 - 5.482: 99.7525% ( 1) 00:11:48.442 5.537 - 5.565: 99.7585% ( 1) 00:11:48.442 5.565 - 5.593: 99.7645% ( 1) 00:11:48.442 5.621 - 5.649: 99.7706% ( 1) 00:11:48.442 
5.704 - 5.732: 99.7766% ( 1) 00:11:48.442 5.732 - 5.760: 99.7826% ( 1) 00:11:48.442 5.760 - 5.788: 99.7947% ( 2) 00:11:48.442 5.983 - 6.010: 99.8008% ( 1) 00:11:48.442 6.066 - 6.094: 99.8068% ( 1) 00:11:48.442 6.122 - 6.150: 99.8189% ( 2) 00:11:48.442 6.261 - 6.289: 99.8249% ( 1) 00:11:48.442 6.289 - 6.317: 99.8309% ( 1) 00:11:48.442 6.372 - 6.400: 99.8370% ( 1) 00:11:48.442 6.428 - 6.456: 99.8491% ( 2) 00:11:48.442 6.483 - 6.511: 99.8551% ( 1) 00:11:48.442 6.511 - 6.539: 99.8611% ( 1) 00:11:48.442 6.567 - 6.595: 99.8672% ( 1) 00:11:48.442 6.595 - 6.623: 99.8732% ( 1) 00:11:48.442 6.762 - 6.790: 99.8792% ( 1) 00:11:48.442 6.929 - 6.957: 99.8853% ( 1) 00:11:48.442 7.346 - 7.402: 99.8974% ( 2) 00:11:48.442 7.457 - 7.513: 99.9034% ( 1) 00:11:48.442 7.847 - 7.903: 99.9094% ( 1) 00:11:48.442 [2024-04-27 00:45:40.990162] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:48.442 7.903 - 7.958: 99.9155% ( 1) 00:11:48.442 8.070 - 8.125: 99.9215% ( 1) 00:11:48.442 9.071 - 9.127: 99.9275% ( 1) 00:11:48.442 3376.529 - 3390.776: 99.9336% ( 1) 00:11:48.442 3989.148 - 4017.642: 100.0000% ( 11) 00:11:48.442 00:11:48.442 Complete histogram 00:11:48.442 ================== 00:11:48.442 Range in us Cumulative Count 00:11:48.442 1.781 - 1.795: 2.6927% ( 446) 00:11:48.442 1.795 - 1.809: 38.3566% ( 5907) 00:11:48.442 1.809 - 1.823: 50.8422% ( 2068) 00:11:48.442 1.823 - 1.837: 54.4346% ( 595) 00:11:48.442 1.837 - 1.850: 59.5182% ( 842) 00:11:48.442 1.850 - 1.864: 85.9083% ( 4371) 00:11:48.442 1.864 - 1.878: 94.4515% ( 1415) 00:11:48.442 1.878 - 1.892: 96.3775% ( 319) 00:11:48.442 1.892 - 1.906: 97.0899% ( 118) 00:11:48.442 1.906 - 1.920: 97.3797% ( 48) 00:11:48.442 1.920 - 1.934: 97.8204% ( 73) 00:11:48.442 1.934 - 1.948: 98.1887% ( 61) 00:11:48.442 1.948 - 1.962: 98.3216% ( 22) 00:11:48.442 1.962 - 1.976: 98.3759% ( 9) 00:11:48.442 1.976 - 1.990: 98.5933% ( 36) 00:11:48.442 1.990 - 2.003: 98.8529% ( 43) 00:11:48.442 2.003 - 2.017: 98.9132% ( 10) 00:11:48.442 2.017 - 2.031: 98.9917% ( 13) 00:11:48.442 2.031 - 2.045: 99.0702% ( 13) 00:11:48.442 2.045 - 2.059: 99.1185% ( 8) 00:11:48.442 2.059 - 2.073: 99.1427% ( 4) 00:11:48.442 2.073 - 2.087: 99.1547% ( 2) 00:11:48.442 2.087 - 2.101: 99.1608% ( 1) 00:11:48.442 2.101 - 2.115: 99.1729% ( 2) 00:11:48.442 2.143 - 2.157: 99.1789% ( 1) 00:11:48.442 2.157 - 2.170: 99.1849% ( 1) 00:11:48.442 2.170 - 2.184: 99.1910% ( 1) 00:11:48.442 2.184 - 2.198: 99.2030% ( 2) 00:11:48.442 2.212 - 2.226: 99.2212% ( 3) 00:11:48.442 2.226 - 2.240: 99.2332% ( 2) 00:11:48.442 2.240 - 2.254: 99.2513% ( 3) 00:11:48.442 2.268 - 2.282: 99.2695% ( 3) 00:11:48.442 2.296 - 2.310: 99.2936% ( 4) 00:11:48.442 2.323 - 2.337: 99.2996% ( 1) 00:11:48.442 2.379 - 2.393: 99.3057% ( 1) 00:11:48.442 2.393 - 2.407: 99.3117% ( 1) 00:11:48.442 2.407 - 2.421: 99.3178% ( 1) 00:11:48.442 2.449 - 2.463: 99.3238% ( 1) 00:11:48.442 2.477 - 2.490: 99.3298% ( 1) 00:11:48.442 2.532 - 2.546: 99.3359% ( 1) 00:11:48.442 2.643 - 2.657: 99.3419% ( 1) 00:11:48.442 3.534 - 3.548: 99.3479% ( 1) 00:11:48.442 3.617 - 3.645: 99.3540% ( 1) 00:11:48.442 3.784 - 3.812: 99.3600% ( 1) 00:11:48.442 3.840 - 3.868: 99.3661% ( 1) 00:11:48.442 3.951 - 3.979: 99.3721% ( 1) 00:11:48.442 4.257 - 4.285: 99.3781% ( 1) 00:11:48.442 4.452 - 4.480: 99.3842% ( 1) 00:11:48.442 4.675 - 4.703: 99.4023% ( 3) 00:11:48.442 4.842 - 4.870: 99.4083% ( 1) 00:11:48.442 4.981 - 5.009: 99.4144% ( 1) 00:11:48.442 5.064 - 5.092: 99.4204% ( 1) 00:11:48.442 5.120 - 5.148: 99.4264% ( 1) 00:11:48.442 5.176 - 
5.203: 99.4325% ( 1) 00:11:48.442 5.259 - 5.287: 99.4385% ( 1) 00:11:48.442 5.287 - 5.315: 99.4445% ( 1) 00:11:48.442 5.398 - 5.426: 99.4506% ( 1) 00:11:48.442 5.454 - 5.482: 99.4566% ( 1) 00:11:48.442 5.816 - 5.843: 99.4627% ( 1) 00:11:48.442 5.843 - 5.871: 99.4687% ( 1) 00:11:48.442 5.899 - 5.927: 99.4747% ( 1) 00:11:48.442 6.400 - 6.428: 99.4808% ( 1) 00:11:48.442 6.428 - 6.456: 99.4868% ( 1) 00:11:48.442 6.595 - 6.623: 99.4928% ( 1) 00:11:48.442 6.734 - 6.762: 99.4989% ( 1) 00:11:48.442 6.817 - 6.845: 99.5110% ( 2) 00:11:48.442 7.235 - 7.290: 99.5170% ( 1) 00:11:48.442 9.294 - 9.350: 99.5230% ( 1) 00:11:48.442 12.577 - 12.633: 99.5291% ( 1) 00:11:48.442 16.250 - 16.362: 99.5351% ( 1) 00:11:48.442 27.492 - 27.603: 99.5411% ( 1) 00:11:48.442 153.155 - 154.045: 99.5472% ( 1) 00:11:48.442 3462.010 - 3476.257: 99.5532% ( 1) 00:11:48.442 3989.148 - 4017.642: 99.9940% ( 73) 00:11:48.442 4986.435 - 5014.929: 100.0000% ( 1) 00:11:48.442 00:11:48.442 00:45:41 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:48.442 00:45:41 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:48.442 00:45:41 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:48.442 00:45:41 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:48.442 00:45:41 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:48.702 [ 00:11:48.702 { 00:11:48.702 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:48.702 "subtype": "Discovery", 00:11:48.702 "listen_addresses": [], 00:11:48.702 "allow_any_host": true, 00:11:48.702 "hosts": [] 00:11:48.702 }, 00:11:48.702 { 00:11:48.702 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:48.702 "subtype": "NVMe", 00:11:48.702 "listen_addresses": [ 00:11:48.702 { 00:11:48.702 "transport": "VFIOUSER", 00:11:48.702 "trtype": "VFIOUSER", 00:11:48.702 "adrfam": "IPv4", 00:11:48.702 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:48.702 "trsvcid": "0" 00:11:48.702 } 00:11:48.702 ], 00:11:48.702 "allow_any_host": true, 00:11:48.702 "hosts": [], 00:11:48.702 "serial_number": "SPDK1", 00:11:48.702 "model_number": "SPDK bdev Controller", 00:11:48.702 "max_namespaces": 32, 00:11:48.702 "min_cntlid": 1, 00:11:48.702 "max_cntlid": 65519, 00:11:48.702 "namespaces": [ 00:11:48.702 { 00:11:48.702 "nsid": 1, 00:11:48.702 "bdev_name": "Malloc1", 00:11:48.702 "name": "Malloc1", 00:11:48.702 "nguid": "037946CB38564CD58ED09BF3E3502A34", 00:11:48.702 "uuid": "037946cb-3856-4cd5-8ed0-9bf3e3502a34" 00:11:48.702 }, 00:11:48.702 { 00:11:48.702 "nsid": 2, 00:11:48.702 "bdev_name": "Malloc3", 00:11:48.702 "name": "Malloc3", 00:11:48.702 "nguid": "07375C693080441C96BE79D194DA6FC7", 00:11:48.702 "uuid": "07375c69-3080-441c-96be-79d194da6fc7" 00:11:48.702 } 00:11:48.702 ] 00:11:48.702 }, 00:11:48.702 { 00:11:48.702 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:48.702 "subtype": "NVMe", 00:11:48.702 "listen_addresses": [ 00:11:48.702 { 00:11:48.702 "transport": "VFIOUSER", 00:11:48.702 "trtype": "VFIOUSER", 00:11:48.702 "adrfam": "IPv4", 00:11:48.702 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:48.702 "trsvcid": "0" 00:11:48.702 } 00:11:48.702 ], 00:11:48.702 "allow_any_host": true, 00:11:48.702 "hosts": [], 00:11:48.702 "serial_number": "SPDK2", 00:11:48.702 "model_number": "SPDK bdev Controller", 00:11:48.702 "max_namespaces": 32, 00:11:48.702 "min_cntlid": 1, 00:11:48.702 
"max_cntlid": 65519, 00:11:48.702 "namespaces": [ 00:11:48.702 { 00:11:48.702 "nsid": 1, 00:11:48.702 "bdev_name": "Malloc2", 00:11:48.702 "name": "Malloc2", 00:11:48.702 "nguid": "D5FB9638FDF045789FF436E5172C1B47", 00:11:48.702 "uuid": "d5fb9638-fdf0-4578-9ff4-36e5172c1b47" 00:11:48.702 } 00:11:48.702 ] 00:11:48.702 } 00:11:48.702 ] 00:11:48.702 00:45:41 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:48.702 00:45:41 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1619802 00:11:48.702 00:45:41 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:48.702 00:45:41 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:48.702 00:45:41 -- common/autotest_common.sh@1251 -- # local i=0 00:11:48.702 00:45:41 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:48.702 00:45:41 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:48.702 00:45:41 -- common/autotest_common.sh@1262 -- # return 0 00:11:48.702 00:45:41 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:48.702 00:45:41 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:48.702 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.702 [2024-04-27 00:45:41.353458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:48.702 Malloc4 00:11:48.961 00:45:41 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:48.961 [2024-04-27 00:45:41.554998] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:48.961 00:45:41 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:48.961 Asynchronous Event Request test 00:11:48.961 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:48.961 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:48.961 Registering asynchronous event callbacks... 00:11:48.961 Starting namespace attribute notice tests for all controllers... 00:11:48.961 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:48.961 aer_cb - Changed Namespace 00:11:48.961 Cleaning up... 
00:11:49.220 [ 00:11:49.220 { 00:11:49.220 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:49.220 "subtype": "Discovery", 00:11:49.220 "listen_addresses": [], 00:11:49.220 "allow_any_host": true, 00:11:49.220 "hosts": [] 00:11:49.220 }, 00:11:49.220 { 00:11:49.220 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:49.220 "subtype": "NVMe", 00:11:49.220 "listen_addresses": [ 00:11:49.220 { 00:11:49.220 "transport": "VFIOUSER", 00:11:49.220 "trtype": "VFIOUSER", 00:11:49.220 "adrfam": "IPv4", 00:11:49.220 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:49.220 "trsvcid": "0" 00:11:49.220 } 00:11:49.220 ], 00:11:49.220 "allow_any_host": true, 00:11:49.220 "hosts": [], 00:11:49.220 "serial_number": "SPDK1", 00:11:49.220 "model_number": "SPDK bdev Controller", 00:11:49.220 "max_namespaces": 32, 00:11:49.220 "min_cntlid": 1, 00:11:49.220 "max_cntlid": 65519, 00:11:49.220 "namespaces": [ 00:11:49.220 { 00:11:49.220 "nsid": 1, 00:11:49.220 "bdev_name": "Malloc1", 00:11:49.220 "name": "Malloc1", 00:11:49.220 "nguid": "037946CB38564CD58ED09BF3E3502A34", 00:11:49.220 "uuid": "037946cb-3856-4cd5-8ed0-9bf3e3502a34" 00:11:49.220 }, 00:11:49.220 { 00:11:49.220 "nsid": 2, 00:11:49.220 "bdev_name": "Malloc3", 00:11:49.220 "name": "Malloc3", 00:11:49.220 "nguid": "07375C693080441C96BE79D194DA6FC7", 00:11:49.220 "uuid": "07375c69-3080-441c-96be-79d194da6fc7" 00:11:49.220 } 00:11:49.220 ] 00:11:49.220 }, 00:11:49.220 { 00:11:49.220 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:49.220 "subtype": "NVMe", 00:11:49.220 "listen_addresses": [ 00:11:49.220 { 00:11:49.220 "transport": "VFIOUSER", 00:11:49.220 "trtype": "VFIOUSER", 00:11:49.220 "adrfam": "IPv4", 00:11:49.220 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:49.220 "trsvcid": "0" 00:11:49.220 } 00:11:49.220 ], 00:11:49.220 "allow_any_host": true, 00:11:49.220 "hosts": [], 00:11:49.220 "serial_number": "SPDK2", 00:11:49.220 "model_number": "SPDK bdev Controller", 00:11:49.220 "max_namespaces": 32, 00:11:49.220 "min_cntlid": 1, 00:11:49.220 "max_cntlid": 65519, 00:11:49.220 "namespaces": [ 00:11:49.220 { 00:11:49.220 "nsid": 1, 00:11:49.220 "bdev_name": "Malloc2", 00:11:49.220 "name": "Malloc2", 00:11:49.220 "nguid": "D5FB9638FDF045789FF436E5172C1B47", 00:11:49.220 "uuid": "d5fb9638-fdf0-4578-9ff4-36e5172c1b47" 00:11:49.220 }, 00:11:49.220 { 00:11:49.220 "nsid": 2, 00:11:49.220 "bdev_name": "Malloc4", 00:11:49.220 "name": "Malloc4", 00:11:49.220 "nguid": "D3AF89A05ED84C74B878F16387C458A4", 00:11:49.220 "uuid": "d3af89a0-5ed8-4c74-b878-f16387c458a4" 00:11:49.220 } 00:11:49.220 ] 00:11:49.220 } 00:11:49.220 ] 00:11:49.220 00:45:41 -- target/nvmf_vfio_user.sh@44 -- # wait 1619802 00:11:49.220 00:45:41 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:49.220 00:45:41 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1611636 00:11:49.220 00:45:41 -- common/autotest_common.sh@936 -- # '[' -z 1611636 ']' 00:11:49.220 00:45:41 -- common/autotest_common.sh@940 -- # kill -0 1611636 00:11:49.220 00:45:41 -- common/autotest_common.sh@941 -- # uname 00:11:49.220 00:45:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:49.220 00:45:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1611636 00:11:49.220 00:45:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:49.220 00:45:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:49.220 00:45:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1611636' 00:11:49.220 killing process with pid 1611636 00:11:49.220 
00:45:41 -- common/autotest_common.sh@955 -- # kill 1611636 00:11:49.220 [2024-04-27 00:45:41.813150] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:11:49.220 00:45:41 -- common/autotest_common.sh@960 -- # wait 1611636 00:11:49.480 00:45:42 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:49.480 00:45:42 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:49.480 00:45:42 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:49.480 00:45:42 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:49.480 00:45:42 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:49.480 00:45:42 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1620032 00:11:49.480 00:45:42 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1620032' 00:11:49.480 Process pid: 1620032 00:11:49.480 00:45:42 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:49.480 00:45:42 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:49.480 00:45:42 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1620032 00:11:49.480 00:45:42 -- common/autotest_common.sh@817 -- # '[' -z 1620032 ']' 00:11:49.480 00:45:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.480 00:45:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:49.480 00:45:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.480 00:45:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:49.480 00:45:42 -- common/autotest_common.sh@10 -- # set +x 00:11:49.480 [2024-04-27 00:45:42.144025] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:49.480 [2024-04-27 00:45:42.144914] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:11:49.480 [2024-04-27 00:45:42.144953] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.480 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.740 [2024-04-27 00:45:42.199981] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.740 [2024-04-27 00:45:42.277640] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.740 [2024-04-27 00:45:42.277681] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.740 [2024-04-27 00:45:42.277688] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.740 [2024-04-27 00:45:42.277695] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.740 [2024-04-27 00:45:42.277700] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
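Editor's note: for the second pass the target is restarted in interrupt mode: nvmf_tgt is launched with --interrupt-mode on cores 0-3, its threads are switched to intr mode (the thread.c notices below), and the VFIOUSER transport is then recreated with the extra '-M -I' flags before both subsystems are rebuilt with the same bdev_malloc_create / nvmf_create_subsystem / nvmf_subsystem_add_ns / nvmf_subsystem_add_listener sequence. A compressed sketch of that restart, with paths and masks taken from the invocations in this run and a plain sleep standing in for the harness's waitforlisten polling:

  # Sketch only; both commands appear verbatim in this run's trace.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  sleep 2   # placeholder for waitforlisten; gives the RPC socket time to appear

  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t VFIOUSER -M -I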
00:11:49.740 [2024-04-27 00:45:42.277747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.740 [2024-04-27 00:45:42.277844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.740 [2024-04-27 00:45:42.277909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.740 [2024-04-27 00:45:42.277910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.740 [2024-04-27 00:45:42.351766] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:11:49.740 [2024-04-27 00:45:42.351907] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:11:49.740 [2024-04-27 00:45:42.352113] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:11:49.740 [2024-04-27 00:45:42.352591] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:49.740 [2024-04-27 00:45:42.352692] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:11:50.307 00:45:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:50.307 00:45:42 -- common/autotest_common.sh@850 -- # return 0 00:11:50.307 00:45:42 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:51.682 00:45:43 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:11:51.682 00:45:44 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:51.682 00:45:44 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:51.682 00:45:44 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:51.682 00:45:44 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:51.682 00:45:44 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:51.682 Malloc1 00:11:51.682 00:45:44 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:51.941 00:45:44 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:52.199 00:45:44 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:52.199 00:45:44 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:52.199 00:45:44 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:52.199 00:45:44 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:52.458 Malloc2 00:11:52.458 00:45:45 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:52.716 00:45:45 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:52.716 00:45:45 -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:52.975 00:45:45 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:52.975 00:45:45 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1620032 00:11:52.975 00:45:45 -- common/autotest_common.sh@936 -- # '[' -z 1620032 ']' 00:11:52.975 00:45:45 -- common/autotest_common.sh@940 -- # kill -0 1620032 00:11:52.975 00:45:45 -- common/autotest_common.sh@941 -- # uname 00:11:52.975 00:45:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:52.975 00:45:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1620032 00:11:52.975 00:45:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:52.975 00:45:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:52.975 00:45:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1620032' 00:11:52.975 killing process with pid 1620032 00:11:52.975 00:45:45 -- common/autotest_common.sh@955 -- # kill 1620032 00:11:52.975 00:45:45 -- common/autotest_common.sh@960 -- # wait 1620032 00:11:53.235 00:45:45 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:53.235 00:45:45 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:53.235 00:11:53.235 real 0m51.151s 00:11:53.235 user 3m22.395s 00:11:53.235 sys 0m3.505s 00:11:53.235 00:45:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:53.235 00:45:45 -- common/autotest_common.sh@10 -- # set +x 00:11:53.235 ************************************ 00:11:53.235 END TEST nvmf_vfio_user 00:11:53.235 ************************************ 00:11:53.235 00:45:45 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:53.235 00:45:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:53.235 00:45:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:53.235 00:45:45 -- common/autotest_common.sh@10 -- # set +x 00:11:53.494 ************************************ 00:11:53.494 START TEST nvmf_vfio_user_nvme_compliance 00:11:53.494 ************************************ 00:11:53.494 00:45:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:53.494 * Looking for test storage... 
00:11:53.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:53.494 00:45:46 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.494 00:45:46 -- nvmf/common.sh@7 -- # uname -s 00:11:53.494 00:45:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.494 00:45:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.494 00:45:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.494 00:45:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.494 00:45:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.494 00:45:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.494 00:45:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.494 00:45:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.494 00:45:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.494 00:45:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.494 00:45:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:53.494 00:45:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:53.494 00:45:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.495 00:45:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.495 00:45:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:53.495 00:45:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.495 00:45:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.495 00:45:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.495 00:45:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.495 00:45:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.495 00:45:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.495 00:45:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.495 00:45:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.495 00:45:46 -- paths/export.sh@5 -- # export PATH 00:11:53.495 00:45:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.495 00:45:46 -- nvmf/common.sh@47 -- # : 0 00:11:53.495 00:45:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:53.495 00:45:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:53.495 00:45:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.495 00:45:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.495 00:45:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.495 00:45:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:53.495 00:45:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:53.495 00:45:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:53.495 00:45:46 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:53.495 00:45:46 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:53.495 00:45:46 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:53.495 00:45:46 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:53.495 00:45:46 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:53.495 00:45:46 -- compliance/compliance.sh@20 -- # nvmfpid=1620798 00:11:53.495 00:45:46 -- compliance/compliance.sh@21 -- # echo 'Process pid: 1620798' 00:11:53.495 Process pid: 1620798 00:11:53.495 00:45:46 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:53.495 00:45:46 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:53.495 00:45:46 -- compliance/compliance.sh@24 -- # waitforlisten 1620798 00:11:53.495 00:45:46 -- common/autotest_common.sh@817 -- # '[' -z 1620798 ']' 00:11:53.495 00:45:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.495 00:45:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:53.495 00:45:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.495 00:45:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:53.495 00:45:46 -- common/autotest_common.sh@10 -- # set +x 00:11:53.754 [2024-04-27 00:45:46.211344] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:11:53.754 [2024-04-27 00:45:46.211384] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.754 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.754 [2024-04-27 00:45:46.265924] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:53.754 [2024-04-27 00:45:46.344191] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.754 [2024-04-27 00:45:46.344227] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.754 [2024-04-27 00:45:46.344234] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.754 [2024-04-27 00:45:46.344240] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.754 [2024-04-27 00:45:46.344244] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.754 [2024-04-27 00:45:46.344285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.754 [2024-04-27 00:45:46.344385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.754 [2024-04-27 00:45:46.344385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.690 00:45:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:54.690 00:45:47 -- common/autotest_common.sh@850 -- # return 0 00:11:54.690 00:45:47 -- compliance/compliance.sh@26 -- # sleep 1 00:11:55.626 00:45:48 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:55.626 00:45:48 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:55.626 00:45:48 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:55.626 00:45:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.626 00:45:48 -- common/autotest_common.sh@10 -- # set +x 00:11:55.626 00:45:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.626 00:45:48 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:55.626 00:45:48 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:55.626 00:45:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.626 00:45:48 -- common/autotest_common.sh@10 -- # set +x 00:11:55.626 malloc0 00:11:55.626 00:45:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.626 00:45:48 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:55.626 00:45:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.626 00:45:48 -- common/autotest_common.sh@10 -- # set +x 00:11:55.626 00:45:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.626 00:45:48 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:55.626 00:45:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.626 00:45:48 -- common/autotest_common.sh@10 -- # set +x 00:11:55.626 00:45:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.626 00:45:48 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:55.626 00:45:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:55.626 00:45:48 -- common/autotest_common.sh@10 -- # set +x 00:11:55.626 00:45:48 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:55.626 00:45:48 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:55.626 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.626 00:11:55.626 00:11:55.626 CUnit - A unit testing framework for C - Version 2.1-3 00:11:55.626 http://cunit.sourceforge.net/ 00:11:55.626 00:11:55.626 00:11:55.626 Suite: nvme_compliance 00:11:55.626 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-27 00:45:48.235484] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:55.626 [2024-04-27 00:45:48.236802] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:55.626 [2024-04-27 00:45:48.236817] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:55.626 [2024-04-27 00:45:48.236823] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:55.626 [2024-04-27 00:45:48.238507] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:55.626 passed 00:11:55.626 Test: admin_identify_ctrlr_verify_fused ...[2024-04-27 00:45:48.318085] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:55.626 [2024-04-27 00:45:48.321107] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:55.885 passed 00:11:55.885 Test: admin_identify_ns ...[2024-04-27 00:45:48.406106] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:55.885 [2024-04-27 00:45:48.468085] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:55.885 [2024-04-27 00:45:48.474106] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:55.885 [2024-04-27 00:45:48.496176] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:55.885 passed 00:11:55.885 Test: admin_get_features_mandatory_features ...[2024-04-27 00:45:48.576019] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:55.885 [2024-04-27 00:45:48.579044] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.144 passed 00:11:56.144 Test: admin_get_features_optional_features ...[2024-04-27 00:45:48.659566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:56.144 [2024-04-27 00:45:48.662589] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.144 passed 00:11:56.144 Test: admin_set_features_number_of_queues ...[2024-04-27 00:45:48.741562] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:56.403 [2024-04-27 00:45:48.847168] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.403 passed 00:11:56.403 Test: admin_get_log_page_mandatory_logs ...[2024-04-27 00:45:48.926484] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:56.403 [2024-04-27 00:45:48.929509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.403 passed 00:11:56.403 Test: admin_get_log_page_with_lpo ...[2024-04-27 00:45:49.008484] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:56.403 [2024-04-27 00:45:49.077089] 
ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:56.403 [2024-04-27 00:45:49.090146] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.661 passed 00:11:56.661 Test: fabric_property_get ...[2024-04-27 00:45:49.169733] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:56.661 [2024-04-27 00:45:49.170953] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:11:56.661 [2024-04-27 00:45:49.172750] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.661 passed 00:11:56.661 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-27 00:45:49.251279] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:56.661 [2024-04-27 00:45:49.252507] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:56.661 [2024-04-27 00:45:49.254299] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.662 passed 00:11:56.662 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-27 00:45:49.333226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:56.921 [2024-04-27 00:45:49.419082] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:56.921 [2024-04-27 00:45:49.435075] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:56.921 [2024-04-27 00:45:49.440174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.921 passed 00:11:56.921 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-27 00:45:49.515306] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:56.921 [2024-04-27 00:45:49.516524] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:56.921 [2024-04-27 00:45:49.518320] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.921 passed 00:11:56.921 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-27 00:45:49.597193] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:57.179 [2024-04-27 00:45:49.677078] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:57.179 [2024-04-27 00:45:49.701080] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:57.179 [2024-04-27 00:45:49.706172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:57.179 passed 00:11:57.179 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-27 00:45:49.782377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:57.179 [2024-04-27 00:45:49.783594] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:57.179 [2024-04-27 00:45:49.783618] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:57.179 [2024-04-27 00:45:49.785405] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:57.179 passed 00:11:57.179 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-27 00:45:49.863639] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:57.437 [2024-04-27 00:45:49.959079] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:11:57.437 [2024-04-27 00:45:49.967081] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:57.437 [2024-04-27 00:45:49.975080] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:57.437 [2024-04-27 00:45:49.983087] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:57.437 [2024-04-27 00:45:50.011249] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:57.437 passed 00:11:57.437 Test: admin_create_io_sq_verify_pc ...[2024-04-27 00:45:50.090293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:57.438 [2024-04-27 00:45:50.107089] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:57.438 [2024-04-27 00:45:50.124476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:57.695 passed 00:11:57.695 Test: admin_create_io_qp_max_qps ...[2024-04-27 00:45:50.205056] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:58.648 [2024-04-27 00:45:51.303081] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:59.215 [2024-04-27 00:45:51.690333] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:59.215 passed 00:11:59.215 Test: admin_create_io_sq_shared_cq ...[2024-04-27 00:45:51.769456] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:59.215 [2024-04-27 00:45:51.902080] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:59.474 [2024-04-27 00:45:51.939136] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:59.474 passed 00:11:59.474 00:11:59.474 Run Summary: Type Total Ran Passed Failed Inactive 00:11:59.474 suites 1 1 n/a 0 0 00:11:59.474 tests 18 18 18 0 0 00:11:59.474 asserts 360 360 360 0 n/a 00:11:59.474 00:11:59.474 Elapsed time = 1.524 seconds 00:11:59.474 00:45:51 -- compliance/compliance.sh@42 -- # killprocess 1620798 00:11:59.474 00:45:51 -- common/autotest_common.sh@936 -- # '[' -z 1620798 ']' 00:11:59.474 00:45:51 -- common/autotest_common.sh@940 -- # kill -0 1620798 00:11:59.474 00:45:51 -- common/autotest_common.sh@941 -- # uname 00:11:59.474 00:45:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:59.474 00:45:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1620798 00:11:59.474 00:45:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:59.474 00:45:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:59.474 00:45:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1620798' 00:11:59.474 killing process with pid 1620798 00:11:59.474 00:45:52 -- common/autotest_common.sh@955 -- # kill 1620798 00:11:59.474 00:45:52 -- common/autotest_common.sh@960 -- # wait 1620798 00:11:59.734 00:45:52 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:59.734 00:45:52 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:59.734 00:11:59.734 real 0m6.212s 00:11:59.734 user 0m17.729s 00:11:59.734 sys 0m0.477s 00:11:59.734 00:45:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:59.734 00:45:52 -- common/autotest_common.sh@10 -- # set +x 00:11:59.734 ************************************ 00:11:59.734 END TEST 
nvmf_vfio_user_nvme_compliance 00:11:59.734 ************************************ 00:11:59.734 00:45:52 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:59.734 00:45:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:59.734 00:45:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:59.734 00:45:52 -- common/autotest_common.sh@10 -- # set +x 00:11:59.734 ************************************ 00:11:59.734 START TEST nvmf_vfio_user_fuzz 00:11:59.734 ************************************ 00:11:59.734 00:45:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:59.994 * Looking for test storage... 00:11:59.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.994 00:45:52 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.994 00:45:52 -- nvmf/common.sh@7 -- # uname -s 00:11:59.994 00:45:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.994 00:45:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.994 00:45:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.994 00:45:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.994 00:45:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.994 00:45:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.994 00:45:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.994 00:45:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.994 00:45:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.994 00:45:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.994 00:45:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:59.994 00:45:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:59.994 00:45:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.994 00:45:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.994 00:45:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.994 00:45:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.994 00:45:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.994 00:45:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.994 00:45:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.994 00:45:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.994 00:45:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.994 00:45:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.994 00:45:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.994 00:45:52 -- paths/export.sh@5 -- # export PATH 00:11:59.994 00:45:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.994 00:45:52 -- nvmf/common.sh@47 -- # : 0 00:11:59.994 00:45:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:59.994 00:45:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:59.994 00:45:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.994 00:45:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.994 00:45:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.994 00:45:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:59.994 00:45:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:59.994 00:45:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:59.994 00:45:52 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:59.994 00:45:52 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:59.994 00:45:52 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:59.994 00:45:52 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:59.994 00:45:52 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:59.994 00:45:52 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:59.994 00:45:52 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:59.994 00:45:52 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1621805 00:11:59.994 00:45:52 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1621805' 00:11:59.994 Process pid: 1621805 00:11:59.994 00:45:52 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:59.994 00:45:52 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1621805 00:11:59.994 00:45:52 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:59.994 00:45:52 -- common/autotest_common.sh@817 -- 
# '[' -z 1621805 ']' 00:11:59.994 00:45:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.994 00:45:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:59.994 00:45:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.994 00:45:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:59.994 00:45:52 -- common/autotest_common.sh@10 -- # set +x 00:12:00.931 00:45:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:00.931 00:45:53 -- common/autotest_common.sh@850 -- # return 0 00:12:00.931 00:45:53 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:01.866 00:45:54 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:01.866 00:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.866 00:45:54 -- common/autotest_common.sh@10 -- # set +x 00:12:01.866 00:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.866 00:45:54 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:01.866 00:45:54 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:01.866 00:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.866 00:45:54 -- common/autotest_common.sh@10 -- # set +x 00:12:01.866 malloc0 00:12:01.866 00:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.866 00:45:54 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:01.866 00:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.866 00:45:54 -- common/autotest_common.sh@10 -- # set +x 00:12:01.866 00:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.866 00:45:54 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:01.866 00:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.866 00:45:54 -- common/autotest_common.sh@10 -- # set +x 00:12:01.866 00:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.866 00:45:54 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:01.866 00:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:01.866 00:45:54 -- common/autotest_common.sh@10 -- # set +x 00:12:01.866 00:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:01.867 00:45:54 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:01.867 00:45:54 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:33.974 Fuzzing completed. 
Shutting down the fuzz application 00:12:33.974 00:12:33.974 Dumping successful admin opcodes: 00:12:33.974 8, 9, 10, 24, 00:12:33.974 Dumping successful io opcodes: 00:12:33.974 0, 00:12:33.974 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1020261, total successful commands: 4012, random_seed: 604510592 00:12:33.974 NS: 0x200003a1ef00 admin qp, Total commands completed: 252138, total successful commands: 2035, random_seed: 3682897728 00:12:33.974 00:46:24 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:33.974 00:46:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.974 00:46:24 -- common/autotest_common.sh@10 -- # set +x 00:12:33.974 00:46:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.974 00:46:24 -- target/vfio_user_fuzz.sh@46 -- # killprocess 1621805 00:12:33.974 00:46:24 -- common/autotest_common.sh@936 -- # '[' -z 1621805 ']' 00:12:33.974 00:46:24 -- common/autotest_common.sh@940 -- # kill -0 1621805 00:12:33.974 00:46:24 -- common/autotest_common.sh@941 -- # uname 00:12:33.974 00:46:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:33.974 00:46:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1621805 00:12:33.974 00:46:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:33.974 00:46:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:33.974 00:46:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1621805' 00:12:33.974 killing process with pid 1621805 00:12:33.974 00:46:24 -- common/autotest_common.sh@955 -- # kill 1621805 00:12:33.974 00:46:24 -- common/autotest_common.sh@960 -- # wait 1621805 00:12:33.974 00:46:25 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:33.974 00:46:25 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:33.974 00:12:33.974 real 0m32.824s 00:12:33.974 user 0m32.041s 00:12:33.974 sys 0m29.498s 00:12:33.974 00:46:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:33.974 00:46:25 -- common/autotest_common.sh@10 -- # set +x 00:12:33.974 ************************************ 00:12:33.974 END TEST nvmf_vfio_user_fuzz 00:12:33.974 ************************************ 00:12:33.974 00:46:25 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:33.974 00:46:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:33.974 00:46:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:33.974 00:46:25 -- common/autotest_common.sh@10 -- # set +x 00:12:33.974 ************************************ 00:12:33.974 START TEST nvmf_host_management 00:12:33.974 ************************************ 00:12:33.974 00:46:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:33.974 * Looking for test storage... 
00:12:33.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.974 00:46:25 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.974 00:46:25 -- nvmf/common.sh@7 -- # uname -s 00:12:33.974 00:46:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.974 00:46:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.974 00:46:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.974 00:46:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.974 00:46:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.974 00:46:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.974 00:46:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.974 00:46:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.974 00:46:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.974 00:46:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.974 00:46:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:33.974 00:46:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:33.974 00:46:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.974 00:46:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.974 00:46:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.975 00:46:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.975 00:46:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.975 00:46:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.975 00:46:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.975 00:46:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.975 00:46:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.975 00:46:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.975 00:46:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.975 00:46:25 -- paths/export.sh@5 -- # export PATH 00:12:33.975 00:46:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.975 00:46:25 -- nvmf/common.sh@47 -- # : 0 00:12:33.975 00:46:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.975 00:46:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.975 00:46:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.975 00:46:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.975 00:46:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.975 00:46:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.975 00:46:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.975 00:46:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.975 00:46:25 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:33.975 00:46:25 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:33.975 00:46:25 -- target/host_management.sh@105 -- # nvmftestinit 00:12:33.975 00:46:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:33.975 00:46:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.975 00:46:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:33.975 00:46:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:33.975 00:46:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:33.975 00:46:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.975 00:46:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:33.975 00:46:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.975 00:46:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:33.975 00:46:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:33.975 00:46:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:33.975 00:46:25 -- common/autotest_common.sh@10 -- # set +x 00:12:38.165 00:46:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:38.165 00:46:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.165 00:46:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.165 00:46:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.165 00:46:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.165 00:46:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.165 00:46:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.165 00:46:30 -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.165 00:46:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.165 
00:46:30 -- nvmf/common.sh@296 -- # e810=() 00:12:38.165 00:46:30 -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.165 00:46:30 -- nvmf/common.sh@297 -- # x722=() 00:12:38.165 00:46:30 -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.165 00:46:30 -- nvmf/common.sh@298 -- # mlx=() 00:12:38.165 00:46:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.165 00:46:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.165 00:46:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.165 00:46:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.165 00:46:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.165 00:46:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.165 00:46:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.165 00:46:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.165 00:46:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.165 00:46:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.165 00:46:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.165 00:46:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.165 00:46:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.165 00:46:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.165 00:46:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.165 00:46:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.165 00:46:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:38.165 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:38.165 00:46:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.165 00:46:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:38.165 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:38.165 00:46:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.165 00:46:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.165 00:46:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.165 00:46:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:38.165 00:46:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.165 00:46:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:86:00.0: cvl_0_0' 00:12:38.165 Found net devices under 0000:86:00.0: cvl_0_0 00:12:38.165 00:46:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.165 00:46:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.165 00:46:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.165 00:46:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:38.165 00:46:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.165 00:46:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:38.165 Found net devices under 0000:86:00.1: cvl_0_1 00:12:38.165 00:46:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.165 00:46:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:38.165 00:46:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:38.165 00:46:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:38.165 00:46:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:38.166 00:46:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:38.166 00:46:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.166 00:46:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.166 00:46:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.166 00:46:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.166 00:46:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.166 00:46:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.166 00:46:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.166 00:46:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.166 00:46:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.166 00:46:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.166 00:46:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.166 00:46:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.166 00:46:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.166 00:46:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.166 00:46:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.166 00:46:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.166 00:46:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.166 00:46:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.166 00:46:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.166 00:46:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:12:38.166 00:12:38.166 --- 10.0.0.2 ping statistics --- 00:12:38.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.166 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:12:38.166 00:46:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:12:38.166 00:12:38.166 --- 10.0.0.1 ping statistics --- 00:12:38.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.166 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:12:38.166 00:46:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.166 00:46:30 -- nvmf/common.sh@411 -- # return 0 00:12:38.166 00:46:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:38.166 00:46:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.166 00:46:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:38.166 00:46:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:38.166 00:46:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.166 00:46:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:38.166 00:46:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:38.166 00:46:30 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:12:38.166 00:46:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:38.166 00:46:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:38.166 00:46:30 -- common/autotest_common.sh@10 -- # set +x 00:12:38.166 ************************************ 00:12:38.166 START TEST nvmf_host_management 00:12:38.166 ************************************ 00:12:38.166 00:46:30 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:12:38.166 00:46:30 -- target/host_management.sh@69 -- # starttarget 00:12:38.166 00:46:30 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:38.166 00:46:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:38.166 00:46:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:38.166 00:46:30 -- common/autotest_common.sh@10 -- # set +x 00:12:38.166 00:46:30 -- nvmf/common.sh@470 -- # nvmfpid=1630330 00:12:38.166 00:46:30 -- nvmf/common.sh@471 -- # waitforlisten 1630330 00:12:38.166 00:46:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:38.166 00:46:30 -- common/autotest_common.sh@817 -- # '[' -z 1630330 ']' 00:12:38.166 00:46:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.166 00:46:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:38.166 00:46:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.166 00:46:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:38.166 00:46:30 -- common/autotest_common.sh@10 -- # set +x 00:12:38.166 [2024-04-27 00:46:30.604653] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:12:38.166 [2024-04-27 00:46:30.604690] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.166 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.166 [2024-04-27 00:46:30.663286] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.166 [2024-04-27 00:46:30.744447] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:38.166 [2024-04-27 00:46:30.744486] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.166 [2024-04-27 00:46:30.744493] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.166 [2024-04-27 00:46:30.744500] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.166 [2024-04-27 00:46:30.744505] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.166 [2024-04-27 00:46:30.744604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.166 [2024-04-27 00:46:30.744689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.166 [2024-04-27 00:46:30.744793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:38.166 [2024-04-27 00:46:30.744794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.733 00:46:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:38.733 00:46:31 -- common/autotest_common.sh@850 -- # return 0 00:12:38.733 00:46:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:38.733 00:46:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:38.733 00:46:31 -- common/autotest_common.sh@10 -- # set +x 00:12:38.992 00:46:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.992 00:46:31 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:38.992 00:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.992 00:46:31 -- common/autotest_common.sh@10 -- # set +x 00:12:38.992 [2024-04-27 00:46:31.458003] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.992 00:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.992 00:46:31 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:38.992 00:46:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:38.992 00:46:31 -- common/autotest_common.sh@10 -- # set +x 00:12:38.992 00:46:31 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:38.993 00:46:31 -- target/host_management.sh@23 -- # cat 00:12:38.993 00:46:31 -- target/host_management.sh@30 -- # rpc_cmd 00:12:38.993 00:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:38.993 00:46:31 -- common/autotest_common.sh@10 -- # set +x 00:12:38.993 Malloc0 00:12:38.993 [2024-04-27 00:46:31.517887] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.993 00:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:38.993 00:46:31 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:38.993 00:46:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:38.993 00:46:31 -- common/autotest_common.sh@10 -- # set +x 00:12:38.993 00:46:31 -- target/host_management.sh@73 -- # perfpid=1630596 00:12:38.993 00:46:31 -- target/host_management.sh@74 -- # waitforlisten 1630596 /var/tmp/bdevperf.sock 00:12:38.993 00:46:31 -- common/autotest_common.sh@817 -- # '[' -z 1630596 ']' 00:12:38.993 00:46:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:38.993 00:46:31 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w 
verify -t 10 00:12:38.993 00:46:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:38.993 00:46:31 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:38.993 00:46:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:38.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:38.993 00:46:31 -- nvmf/common.sh@521 -- # config=() 00:12:38.993 00:46:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:38.993 00:46:31 -- nvmf/common.sh@521 -- # local subsystem config 00:12:38.993 00:46:31 -- common/autotest_common.sh@10 -- # set +x 00:12:38.993 00:46:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:38.993 00:46:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:38.993 { 00:12:38.993 "params": { 00:12:38.993 "name": "Nvme$subsystem", 00:12:38.993 "trtype": "$TEST_TRANSPORT", 00:12:38.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:38.993 "adrfam": "ipv4", 00:12:38.993 "trsvcid": "$NVMF_PORT", 00:12:38.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:38.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:38.993 "hdgst": ${hdgst:-false}, 00:12:38.993 "ddgst": ${ddgst:-false} 00:12:38.993 }, 00:12:38.993 "method": "bdev_nvme_attach_controller" 00:12:38.993 } 00:12:38.993 EOF 00:12:38.993 )") 00:12:38.993 00:46:31 -- nvmf/common.sh@543 -- # cat 00:12:38.993 00:46:31 -- nvmf/common.sh@545 -- # jq . 00:12:38.993 00:46:31 -- nvmf/common.sh@546 -- # IFS=, 00:12:38.993 00:46:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:38.993 "params": { 00:12:38.993 "name": "Nvme0", 00:12:38.993 "trtype": "tcp", 00:12:38.993 "traddr": "10.0.0.2", 00:12:38.993 "adrfam": "ipv4", 00:12:38.993 "trsvcid": "4420", 00:12:38.993 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:38.993 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:38.993 "hdgst": false, 00:12:38.993 "ddgst": false 00:12:38.993 }, 00:12:38.993 "method": "bdev_nvme_attach_controller" 00:12:38.993 }' 00:12:38.993 [2024-04-27 00:46:31.606294] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:12:38.993 [2024-04-27 00:46:31.606334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1630596 ] 00:12:38.993 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.993 [2024-04-27 00:46:31.661396] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.252 [2024-04-27 00:46:31.734848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.511 Running I/O for 10 seconds... 
00:12:39.770 00:46:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:39.770 00:46:32 -- common/autotest_common.sh@850 -- # return 0 00:12:39.770 00:46:32 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:39.770 00:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.770 00:46:32 -- common/autotest_common.sh@10 -- # set +x 00:12:39.770 00:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:39.770 00:46:32 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:39.770 00:46:32 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:39.770 00:46:32 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:39.770 00:46:32 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:39.770 00:46:32 -- target/host_management.sh@52 -- # local ret=1 00:12:39.770 00:46:32 -- target/host_management.sh@53 -- # local i 00:12:39.770 00:46:32 -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:39.770 00:46:32 -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:39.770 00:46:32 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:39.770 00:46:32 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:39.770 00:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:39.770 00:46:32 -- common/autotest_common.sh@10 -- # set +x 00:12:40.031 00:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:40.031 00:46:32 -- target/host_management.sh@55 -- # read_io_count=451 00:12:40.031 00:46:32 -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:12:40.031 00:46:32 -- target/host_management.sh@59 -- # ret=0 00:12:40.031 00:46:32 -- target/host_management.sh@60 -- # break 00:12:40.031 00:46:32 -- target/host_management.sh@64 -- # return 0 00:12:40.031 00:46:32 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:40.031 00:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:40.031 00:46:32 -- common/autotest_common.sh@10 -- # set +x 00:12:40.031 [2024-04-27 00:46:32.501076] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd02b10 is same with the state(5) to be set 00:12:40.031 [2024-04-27 00:46:32.501139] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd02b10 is same with the state(5) to be set 00:12:40.031 [2024-04-27 00:46:32.501147] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd02b10 is same with the state(5) to be set 00:12:40.031 [2024-04-27 00:46:32.501153] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd02b10 is same with the state(5) to be set 00:12:40.031 [2024-04-27 00:46:32.501160] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd02b10 is same with the state(5) to be set 00:12:40.031 [2024-04-27 00:46:32.501166] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd02b10 is same with the state(5) to be set 00:12:40.031 [2024-04-27 00:46:32.501172] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd02b10 is same with the state(5) to be set 00:12:40.031 [2024-04-27 00:46:32.503084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:12:40.031 [2024-04-27 00:46:32.503120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.031 [2024-04-27 00:46:32.503129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.031 [2024-04-27 00:46:32.503138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.031 [2024-04-27 00:46:32.503146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.031 [2024-04-27 00:46:32.503160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.031 [2024-04-27 00:46:32.503168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:40.031 [2024-04-27 00:46:32.503175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.031 [2024-04-27 00:46:32.503183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e179b0 is same with the state(5) to be set 00:12:40.031 [2024-04-27 00:46:32.503624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.031 [2024-04-27 00:46:32.503638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.031 [2024-04-27 00:46:32.503651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.031 [2024-04-27 00:46:32.503659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.031 [2024-04-27 00:46:32.503668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.031 [2024-04-27 00:46:32.503676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 
00:46:32.503738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503916] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.503986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.503995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.032 [2024-04-27 00:46:32.504342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.032 [2024-04-27 00:46:32.504350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:40.033 [2024-04-27 00:46:32.504756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.504818] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22492b0 was disconnected and freed. reset controller. 
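Every WRITE still outstanding on the deleted submission queue is completed individually with the ABORTED - SQ DELETION (00/08) status shown above, so one controller reset at queue depth 64 produces on the order of 64 such abort entries. A quick way to tally them when reading a saved copy of this console output (the log file name below is only an example):

  grep -c 'ABORTED - SQ DELETION' nvmf-tcp-phy-autotest.log               # number of aborted completions
  grep -o 'WRITE sqid:1 cid:[0-9]*' nvmf-tcp-phy-autotest.log | sort -u   # which command IDs were hit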
00:12:40.033 [2024-04-27 00:46:32.505714] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:40.033 00:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:40.033 00:46:32 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:40.033 00:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:40.033 00:46:32 -- common/autotest_common.sh@10 -- # set +x 00:12:40.033 task offset: 65536 on job bdev=Nvme0n1 fails 00:12:40.033 00:12:40.033 Latency(us) 00:12:40.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.033 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:40.033 Job: Nvme0n1 ended in about 0.45 seconds with error 00:12:40.033 Verification LBA range: start 0x0 length 0x400 00:12:40.033 Nvme0n1 : 0.45 1140.10 71.26 142.51 0.00 48791.74 1481.68 53568.56 00:12:40.033 =================================================================================================================== 00:12:40.033 Total : 1140.10 71.26 142.51 0.00 48791.74 1481.68 53568.56 00:12:40.033 [2024-04-27 00:46:32.507324] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:40.033 [2024-04-27 00:46:32.507341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e179b0 (9): Bad file descriptor 00:12:40.033 [2024-04-27 00:46:32.510841] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:12:40.033 [2024-04-27 00:46:32.510984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:12:40.033 [2024-04-27 00:46:32.511011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:40.033 [2024-04-27 00:46:32.511026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:12:40.033 [2024-04-27 00:46:32.511035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:12:40.033 [2024-04-27 00:46:32.511043] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:12:40.033 [2024-04-27 00:46:32.511051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1e179b0 00:12:40.033 [2024-04-27 00:46:32.511082] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e179b0 (9): Bad file descriptor 00:12:40.033 [2024-04-27 00:46:32.511096] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:12:40.033 [2024-04-27 00:46:32.511108] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:12:40.033 [2024-04-27 00:46:32.511118] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:12:40.033 [2024-04-27 00:46:32.511132] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
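The CONNECT failure above ("Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'", reported back to the initiator as sct 1, sc 132) occurs because the subsystem does not yet list that host NQN, and the script responds by adding it with the nvmf_subsystem_add_host rpc_cmd traced above. Granting the same access by hand against a running target would look roughly like this sketch (paths and NQNs taken from this run, default RPC socket assumed):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # allow host0 to connect to cnode0; subsequent fabric CONNECTs should then succeed
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # confirm the host now appears in the subsystem's allowed-hosts list
  $rpc nvmf_get_subsystems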
00:12:40.033 00:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:40.033 00:46:32 -- target/host_management.sh@87 -- # sleep 1 00:12:40.968 00:46:33 -- target/host_management.sh@91 -- # kill -9 1630596 00:12:40.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1630596) - No such process 00:12:40.968 00:46:33 -- target/host_management.sh@91 -- # true 00:12:40.968 00:46:33 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:40.968 00:46:33 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:40.968 00:46:33 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:40.968 00:46:33 -- nvmf/common.sh@521 -- # config=() 00:12:40.968 00:46:33 -- nvmf/common.sh@521 -- # local subsystem config 00:12:40.968 00:46:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:40.968 00:46:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:40.968 { 00:12:40.968 "params": { 00:12:40.968 "name": "Nvme$subsystem", 00:12:40.968 "trtype": "$TEST_TRANSPORT", 00:12:40.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:40.968 "adrfam": "ipv4", 00:12:40.968 "trsvcid": "$NVMF_PORT", 00:12:40.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:40.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:40.968 "hdgst": ${hdgst:-false}, 00:12:40.968 "ddgst": ${ddgst:-false} 00:12:40.968 }, 00:12:40.968 "method": "bdev_nvme_attach_controller" 00:12:40.968 } 00:12:40.968 EOF 00:12:40.968 )") 00:12:40.968 00:46:33 -- nvmf/common.sh@543 -- # cat 00:12:40.968 00:46:33 -- nvmf/common.sh@545 -- # jq . 00:12:40.968 00:46:33 -- nvmf/common.sh@546 -- # IFS=, 00:12:40.968 00:46:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:40.968 "params": { 00:12:40.968 "name": "Nvme0", 00:12:40.969 "trtype": "tcp", 00:12:40.969 "traddr": "10.0.0.2", 00:12:40.969 "adrfam": "ipv4", 00:12:40.969 "trsvcid": "4420", 00:12:40.969 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:40.969 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:40.969 "hdgst": false, 00:12:40.969 "ddgst": false 00:12:40.969 }, 00:12:40.969 "method": "bdev_nvme_attach_controller" 00:12:40.969 }' 00:12:40.969 [2024-04-27 00:46:33.568205] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:12:40.969 [2024-04-27 00:46:33.568254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1630846 ] 00:12:40.969 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.969 [2024-04-27 00:46:33.622709] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.227 [2024-04-27 00:46:33.692495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.227 Running I/O for 1 seconds... 
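The second bdevperf run above takes its whole bdev configuration from the JSON rendered by gen_nvmf_target_json and passed in over /dev/fd/62. A standalone equivalent with the same attach parameters might look like the sketch below; the outer "subsystems"/"bdev" wrapper is assumed from the standard SPDK JSON-config layout (that part of the generator is not visible in this trace), and the config file path is illustrative only.

  cat > /tmp/bdevperf_nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # same workload flags as the run above: queue depth 64, 64 KiB verify I/O for 1 second
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1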
00:12:42.604 00:12:42.604 Latency(us) 00:12:42.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.604 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:42.604 Verification LBA range: start 0x0 length 0x400 00:12:42.604 Nvme0n1 : 1.06 1089.42 68.09 0.00 0.00 58042.57 10827.69 57899.63 00:12:42.604 =================================================================================================================== 00:12:42.604 Total : 1089.42 68.09 0.00 0.00 58042.57 10827.69 57899.63 00:12:42.604 00:46:35 -- target/host_management.sh@102 -- # stoptarget 00:12:42.604 00:46:35 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:42.604 00:46:35 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:42.604 00:46:35 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:42.604 00:46:35 -- target/host_management.sh@40 -- # nvmftestfini 00:12:42.604 00:46:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:42.604 00:46:35 -- nvmf/common.sh@117 -- # sync 00:12:42.604 00:46:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:42.604 00:46:35 -- nvmf/common.sh@120 -- # set +e 00:12:42.604 00:46:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:42.604 00:46:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:42.604 rmmod nvme_tcp 00:12:42.604 rmmod nvme_fabrics 00:12:42.604 rmmod nvme_keyring 00:12:42.604 00:46:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:42.604 00:46:35 -- nvmf/common.sh@124 -- # set -e 00:12:42.605 00:46:35 -- nvmf/common.sh@125 -- # return 0 00:12:42.605 00:46:35 -- nvmf/common.sh@478 -- # '[' -n 1630330 ']' 00:12:42.605 00:46:35 -- nvmf/common.sh@479 -- # killprocess 1630330 00:12:42.605 00:46:35 -- common/autotest_common.sh@936 -- # '[' -z 1630330 ']' 00:12:42.605 00:46:35 -- common/autotest_common.sh@940 -- # kill -0 1630330 00:12:42.605 00:46:35 -- common/autotest_common.sh@941 -- # uname 00:12:42.605 00:46:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:42.605 00:46:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1630330 00:12:42.605 00:46:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:42.605 00:46:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:42.605 00:46:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1630330' 00:12:42.605 killing process with pid 1630330 00:12:42.605 00:46:35 -- common/autotest_common.sh@955 -- # kill 1630330 00:12:42.605 00:46:35 -- common/autotest_common.sh@960 -- # wait 1630330 00:12:42.865 [2024-04-27 00:46:35.448836] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:42.865 00:46:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:42.865 00:46:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:42.865 00:46:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:42.865 00:46:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:42.865 00:46:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:42.865 00:46:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.865 00:46:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.865 00:46:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.419 00:46:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
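The teardown traced above (nvmftestfini) unloads the host-side NVMe/TCP modules, kills the nvmf_tgt reactor, and removes the test network namespace and initiator address. Repeated by hand it is roughly the following sketch; the PID and interface names are the ones from this run, and the ip netns delete line is an assumed stand-in for the _remove_spdk_ns helper, whose body is not shown in the trace.

  sudo modprobe -v -r nvme-tcp            # also drops nvme_fabrics/nvme_keyring, as in the rmmod output above
  sudo modprobe -v -r nvme-fabrics
  sudo kill 1630330                       # nvmf_tgt PID from this particular run
  while sudo kill -0 1630330 2>/dev/null; do sleep 0.1; done
  sudo ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
  sudo ip -4 addr flush cvl_0_1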
00:12:45.419 00:12:45.419 real 0m6.985s 00:12:45.419 user 0m21.349s 00:12:45.419 sys 0m1.096s 00:12:45.419 00:46:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:45.419 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:45.419 ************************************ 00:12:45.419 END TEST nvmf_host_management 00:12:45.419 ************************************ 00:12:45.419 00:46:37 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:45.419 00:12:45.419 real 0m12.151s 00:12:45.419 user 0m22.673s 00:12:45.419 sys 0m4.846s 00:12:45.419 00:46:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:45.419 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:45.419 ************************************ 00:12:45.419 END TEST nvmf_host_management 00:12:45.419 ************************************ 00:12:45.419 00:46:37 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:45.419 00:46:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:45.419 00:46:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.419 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:45.419 ************************************ 00:12:45.419 START TEST nvmf_lvol 00:12:45.419 ************************************ 00:12:45.419 00:46:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:45.419 * Looking for test storage... 00:12:45.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.419 00:46:37 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.419 00:46:37 -- nvmf/common.sh@7 -- # uname -s 00:12:45.419 00:46:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.419 00:46:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.419 00:46:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.419 00:46:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.419 00:46:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.419 00:46:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.419 00:46:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.419 00:46:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.419 00:46:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.419 00:46:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.419 00:46:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:45.419 00:46:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:45.419 00:46:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.419 00:46:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.419 00:46:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.419 00:46:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.419 00:46:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.419 00:46:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.419 00:46:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.419 00:46:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.420 00:46:37 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.420 00:46:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.420 00:46:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.420 00:46:37 -- paths/export.sh@5 -- # export PATH 00:12:45.420 00:46:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.420 00:46:37 -- nvmf/common.sh@47 -- # : 0 00:12:45.420 00:46:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.420 00:46:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.420 00:46:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.420 00:46:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.420 00:46:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.420 00:46:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.420 00:46:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.420 00:46:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.420 00:46:37 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:45.420 00:46:37 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:45.420 00:46:37 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:45.420 00:46:37 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:45.420 00:46:37 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.420 00:46:37 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:45.420 00:46:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:45.420 00:46:37 -- nvmf/common.sh@435 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.420 00:46:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:45.420 00:46:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:45.420 00:46:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:45.420 00:46:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.420 00:46:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.420 00:46:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.420 00:46:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:45.420 00:46:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:45.420 00:46:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:45.420 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:50.712 00:46:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:50.712 00:46:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:50.712 00:46:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:50.712 00:46:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:50.712 00:46:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:50.712 00:46:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:50.712 00:46:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:50.712 00:46:43 -- nvmf/common.sh@295 -- # net_devs=() 00:12:50.712 00:46:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:50.712 00:46:43 -- nvmf/common.sh@296 -- # e810=() 00:12:50.712 00:46:43 -- nvmf/common.sh@296 -- # local -ga e810 00:12:50.712 00:46:43 -- nvmf/common.sh@297 -- # x722=() 00:12:50.712 00:46:43 -- nvmf/common.sh@297 -- # local -ga x722 00:12:50.712 00:46:43 -- nvmf/common.sh@298 -- # mlx=() 00:12:50.712 00:46:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:50.712 00:46:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.712 00:46:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.712 00:46:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.712 00:46:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.712 00:46:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.712 00:46:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.712 00:46:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.712 00:46:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.712 00:46:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.712 00:46:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.712 00:46:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.712 00:46:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:50.712 00:46:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:50.712 00:46:43 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:50.712 00:46:43 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:50.712 00:46:43 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:50.712 00:46:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:50.712 00:46:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.712 00:46:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:50.712 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:50.712 00:46:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:50.712 00:46:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:50.713 
00:46:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.713 00:46:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:50.713 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:50.713 00:46:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:50.713 00:46:43 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.713 00:46:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.713 00:46:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:50.713 00:46:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.713 00:46:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:50.713 Found net devices under 0000:86:00.0: cvl_0_0 00:12:50.713 00:46:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.713 00:46:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.713 00:46:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.713 00:46:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:50.713 00:46:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.713 00:46:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:50.713 Found net devices under 0000:86:00.1: cvl_0_1 00:12:50.713 00:46:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.713 00:46:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:50.713 00:46:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:50.713 00:46:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:50.713 00:46:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.713 00:46:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.713 00:46:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.713 00:46:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:50.713 00:46:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.713 00:46:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.713 00:46:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:50.713 00:46:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.713 00:46:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.713 00:46:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:50.713 00:46:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:50.713 00:46:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.713 00:46:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.713 00:46:43 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:12:50.713 00:46:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.713 00:46:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:50.713 00:46:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.713 00:46:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:50.713 00:46:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.713 00:46:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:50.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:12:50.713 00:12:50.713 --- 10.0.0.2 ping statistics --- 00:12:50.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.713 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:12:50.713 00:46:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:50.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:12:50.713 00:12:50.713 --- 10.0.0.1 ping statistics --- 00:12:50.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.713 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:12:50.713 00:46:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.713 00:46:43 -- nvmf/common.sh@411 -- # return 0 00:12:50.713 00:46:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:50.713 00:46:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.713 00:46:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:50.713 00:46:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.713 00:46:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:50.713 00:46:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:50.713 00:46:43 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:50.713 00:46:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:50.713 00:46:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:50.713 00:46:43 -- common/autotest_common.sh@10 -- # set +x 00:12:50.713 00:46:43 -- nvmf/common.sh@470 -- # nvmfpid=1634632 00:12:50.713 00:46:43 -- nvmf/common.sh@471 -- # waitforlisten 1634632 00:12:50.713 00:46:43 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:50.713 00:46:43 -- common/autotest_common.sh@817 -- # '[' -z 1634632 ']' 00:12:50.713 00:46:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.713 00:46:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:50.713 00:46:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.713 00:46:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:50.713 00:46:43 -- common/autotest_common.sh@10 -- # set +x 00:12:50.713 [2024-04-27 00:46:43.362157] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:12:50.713 [2024-04-27 00:46:43.362204] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.713 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.972 [2024-04-27 00:46:43.419090] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:50.972 [2024-04-27 00:46:43.497561] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.972 [2024-04-27 00:46:43.497598] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.972 [2024-04-27 00:46:43.497605] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.972 [2024-04-27 00:46:43.497612] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.972 [2024-04-27 00:46:43.497617] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.972 [2024-04-27 00:46:43.497654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.972 [2024-04-27 00:46:43.497737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.972 [2024-04-27 00:46:43.497738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.540 00:46:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:51.540 00:46:44 -- common/autotest_common.sh@850 -- # return 0 00:12:51.540 00:46:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:51.540 00:46:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:51.540 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:51.540 00:46:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.540 00:46:44 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:51.799 [2024-04-27 00:46:44.350458] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.799 00:46:44 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.058 00:46:44 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:52.058 00:46:44 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.058 00:46:44 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:52.058 00:46:44 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:52.317 00:46:44 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:52.576 00:46:45 -- target/nvmf_lvol.sh@29 -- # lvs=0675fdda-1e7b-4bfd-a43f-bde725ab446a 00:12:52.576 00:46:45 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0675fdda-1e7b-4bfd-a43f-bde725ab446a lvol 20 00:12:52.835 00:46:45 -- target/nvmf_lvol.sh@32 -- # lvol=361653d4-375a-4079-bedb-8a865ca33df4 00:12:52.835 00:46:45 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:52.835 00:46:45 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 361653d4-375a-4079-bedb-8a865ca33df4 00:12:53.093 00:46:45 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:53.351 [2024-04-27 00:46:45.827805] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.351 00:46:45 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:53.351 00:46:46 -- target/nvmf_lvol.sh@42 -- # perf_pid=1635121 00:12:53.351 00:46:46 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:53.351 00:46:46 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:53.610 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.547 00:46:47 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 361653d4-375a-4079-bedb-8a865ca33df4 MY_SNAPSHOT 00:12:54.807 00:46:47 -- target/nvmf_lvol.sh@47 -- # snapshot=825ab83f-fc80-40d2-9a46-856d065fb3e6 00:12:54.807 00:46:47 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 361653d4-375a-4079-bedb-8a865ca33df4 30 00:12:54.807 00:46:47 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 825ab83f-fc80-40d2-9a46-856d065fb3e6 MY_CLONE 00:12:55.067 00:46:47 -- target/nvmf_lvol.sh@49 -- # clone=d9aea693-f3d9-45b8-bce5-5966c90a7781 00:12:55.067 00:46:47 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d9aea693-f3d9-45b8-bce5-5966c90a7781 00:12:55.638 00:46:48 -- target/nvmf_lvol.sh@53 -- # wait 1635121 00:13:03.769 Initializing NVMe Controllers 00:13:03.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:03.769 Controller IO queue size 128, less than required. 00:13:03.769 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:03.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:03.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:03.769 Initialization complete. Launching workers. 
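The perf results that follow come from the stack assembled by the rpc.py calls traced earlier in this test: two 64 MB malloc bdevs (512-byte blocks) striped into a raid0, an lvol store on top, a 20 MiB volume that is snapshotted, grown to 30 MiB and cloned, and finally exported over NVMe/TCP. Condensed into a standalone sketch (UUIDs are captured rather than hard-coded, and a target must already be listening on the default RPC socket):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                                    # Malloc0
  $rpc bdev_malloc_create 64 512                                    # Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                    # returns the lvstore UUID
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                   # 20 MiB logical volume
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30                                  # grow the live volume to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18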
00:13:03.769 ======================================================== 00:13:03.769 Latency(us) 00:13:03.769 Device Information : IOPS MiB/s Average min max 00:13:03.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11695.06 45.68 10951.36 2153.65 68513.22 00:13:03.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11393.26 44.50 11236.85 2731.83 49307.32 00:13:03.769 ======================================================== 00:13:03.769 Total : 23088.32 90.19 11092.24 2153.65 68513.22 00:13:03.769 00:13:03.769 00:46:56 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:04.028 00:46:56 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 361653d4-375a-4079-bedb-8a865ca33df4 00:13:04.287 00:46:56 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0675fdda-1e7b-4bfd-a43f-bde725ab446a 00:13:04.287 00:46:56 -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:04.287 00:46:56 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:04.287 00:46:56 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:04.287 00:46:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:04.287 00:46:56 -- nvmf/common.sh@117 -- # sync 00:13:04.287 00:46:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.287 00:46:56 -- nvmf/common.sh@120 -- # set +e 00:13:04.287 00:46:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.287 00:46:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.287 rmmod nvme_tcp 00:13:04.287 rmmod nvme_fabrics 00:13:04.287 rmmod nvme_keyring 00:13:04.547 00:46:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.547 00:46:56 -- nvmf/common.sh@124 -- # set -e 00:13:04.547 00:46:56 -- nvmf/common.sh@125 -- # return 0 00:13:04.547 00:46:56 -- nvmf/common.sh@478 -- # '[' -n 1634632 ']' 00:13:04.547 00:46:56 -- nvmf/common.sh@479 -- # killprocess 1634632 00:13:04.547 00:46:56 -- common/autotest_common.sh@936 -- # '[' -z 1634632 ']' 00:13:04.547 00:46:56 -- common/autotest_common.sh@940 -- # kill -0 1634632 00:13:04.547 00:46:56 -- common/autotest_common.sh@941 -- # uname 00:13:04.547 00:46:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:04.547 00:46:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1634632 00:13:04.547 00:46:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:04.547 00:46:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:04.547 00:46:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1634632' 00:13:04.547 killing process with pid 1634632 00:13:04.547 00:46:57 -- common/autotest_common.sh@955 -- # kill 1634632 00:13:04.547 00:46:57 -- common/autotest_common.sh@960 -- # wait 1634632 00:13:04.807 00:46:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:04.807 00:46:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:04.807 00:46:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:04.807 00:46:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.807 00:46:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:04.807 00:46:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.807 00:46:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.807 00:46:57 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:06.711 00:46:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:06.711 00:13:06.711 real 0m21.643s 00:13:06.711 user 1m3.755s 00:13:06.711 sys 0m6.827s 00:13:06.711 00:46:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:06.711 00:46:59 -- common/autotest_common.sh@10 -- # set +x 00:13:06.711 ************************************ 00:13:06.711 END TEST nvmf_lvol 00:13:06.711 ************************************ 00:13:06.711 00:46:59 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:06.711 00:46:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:06.711 00:46:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.711 00:46:59 -- common/autotest_common.sh@10 -- # set +x 00:13:06.971 ************************************ 00:13:06.971 START TEST nvmf_lvs_grow 00:13:06.971 ************************************ 00:13:06.971 00:46:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:06.971 * Looking for test storage... 00:13:06.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.971 00:46:59 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.971 00:46:59 -- nvmf/common.sh@7 -- # uname -s 00:13:06.971 00:46:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.971 00:46:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.971 00:46:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.971 00:46:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.971 00:46:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.971 00:46:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.971 00:46:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.971 00:46:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.971 00:46:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.971 00:46:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.971 00:46:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:06.971 00:46:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:06.971 00:46:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.971 00:46:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.971 00:46:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.971 00:46:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.971 00:46:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.971 00:46:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.971 00:46:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.971 00:46:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.971 00:46:59 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.971 00:46:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.971 00:46:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.971 00:46:59 -- paths/export.sh@5 -- # export PATH 00:13:06.971 00:46:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.971 00:46:59 -- nvmf/common.sh@47 -- # : 0 00:13:06.971 00:46:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.971 00:46:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.971 00:46:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.971 00:46:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.971 00:46:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.971 00:46:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:06.971 00:46:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.971 00:46:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.971 00:46:59 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:06.971 00:46:59 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:06.971 00:46:59 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:13:06.971 00:46:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:06.971 00:46:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.971 00:46:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:06.971 00:46:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:06.971 00:46:59 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:13:06.971 00:46:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.971 00:46:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.971 00:46:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.230 00:46:59 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:07.230 00:46:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:07.230 00:46:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:07.230 00:46:59 -- common/autotest_common.sh@10 -- # set +x 00:13:12.502 00:47:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:12.502 00:47:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:12.502 00:47:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:12.502 00:47:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:12.502 00:47:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:12.502 00:47:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:12.502 00:47:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:12.502 00:47:04 -- nvmf/common.sh@295 -- # net_devs=() 00:13:12.502 00:47:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:12.502 00:47:04 -- nvmf/common.sh@296 -- # e810=() 00:13:12.502 00:47:04 -- nvmf/common.sh@296 -- # local -ga e810 00:13:12.502 00:47:04 -- nvmf/common.sh@297 -- # x722=() 00:13:12.502 00:47:04 -- nvmf/common.sh@297 -- # local -ga x722 00:13:12.502 00:47:04 -- nvmf/common.sh@298 -- # mlx=() 00:13:12.502 00:47:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:12.502 00:47:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.502 00:47:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.502 00:47:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.502 00:47:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.502 00:47:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.502 00:47:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.502 00:47:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.502 00:47:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.502 00:47:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.502 00:47:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.502 00:47:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.502 00:47:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:12.502 00:47:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:12.502 00:47:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:12.502 00:47:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.502 00:47:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:12.502 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:12.502 00:47:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.502 
00:47:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.502 00:47:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:12.502 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:12.502 00:47:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:12.502 00:47:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:12.502 00:47:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.502 00:47:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.502 00:47:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:12.502 00:47:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.502 00:47:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:12.502 Found net devices under 0000:86:00.0: cvl_0_0 00:13:12.503 00:47:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.503 00:47:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.503 00:47:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.503 00:47:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:12.503 00:47:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.503 00:47:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:12.503 Found net devices under 0000:86:00.1: cvl_0_1 00:13:12.503 00:47:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.503 00:47:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:12.503 00:47:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:12.503 00:47:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:12.503 00:47:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:12.503 00:47:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:12.503 00:47:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.503 00:47:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.503 00:47:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.503 00:47:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:12.503 00:47:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.503 00:47:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.503 00:47:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:12.503 00:47:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.503 00:47:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.503 00:47:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:12.503 00:47:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:12.503 00:47:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.503 00:47:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.503 00:47:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.503 00:47:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.503 00:47:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:12.503 
00:47:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.503 00:47:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.503 00:47:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.503 00:47:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:12.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:13:12.503 00:13:12.503 --- 10.0.0.2 ping statistics --- 00:13:12.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.503 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:13:12.503 00:47:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:13:12.503 00:13:12.503 --- 10.0.0.1 ping statistics --- 00:13:12.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.503 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:13:12.503 00:47:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.503 00:47:04 -- nvmf/common.sh@411 -- # return 0 00:13:12.503 00:47:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:12.503 00:47:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.503 00:47:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:12.503 00:47:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:12.503 00:47:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.503 00:47:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:12.503 00:47:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:12.503 00:47:04 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:13:12.503 00:47:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:12.503 00:47:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:12.503 00:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:12.503 00:47:05 -- nvmf/common.sh@470 -- # nvmfpid=1640488 00:13:12.503 00:47:05 -- nvmf/common.sh@471 -- # waitforlisten 1640488 00:13:12.503 00:47:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:12.503 00:47:05 -- common/autotest_common.sh@817 -- # '[' -z 1640488 ']' 00:13:12.503 00:47:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.503 00:47:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:12.503 00:47:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.503 00:47:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:12.503 00:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:12.503 [2024-04-27 00:47:05.055471] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
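The nvmftestinit phase above wires the two E810 ports (cvl_0_0 and cvl_0_1) into a loopback-style NVMe/TCP topology on a single host: the target port is moved into its own network namespace, each side gets a 10.0.0.0/24 address, TCP port 4420 is opened, connectivity is verified with ping in both directions, and only then is the nvme-tcp host driver loaded. Condensed from the trace above (same interface names and addresses as reported by the harness; run as root), the setup amounts to:

ip netns add cvl_0_0_ns_spdk                                        # target gets its own network stack
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator-side address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target-side address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic reach the target
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
modprobe nvme-tcp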
00:13:12.503 [2024-04-27 00:47:05.055512] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.503 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.503 [2024-04-27 00:47:05.112708] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.503 [2024-04-27 00:47:05.182760] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.503 [2024-04-27 00:47:05.182804] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.503 [2024-04-27 00:47:05.182810] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.503 [2024-04-27 00:47:05.182816] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.503 [2024-04-27 00:47:05.182821] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.503 [2024-04-27 00:47:05.182837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.441 00:47:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:13.441 00:47:05 -- common/autotest_common.sh@850 -- # return 0 00:13:13.441 00:47:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:13.441 00:47:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:13.441 00:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:13.441 00:47:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.441 00:47:05 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:13.441 [2024-04-27 00:47:06.043012] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.441 00:47:06 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:13:13.441 00:47:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:13.441 00:47:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:13.441 00:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:13.701 ************************************ 00:13:13.701 START TEST lvs_grow_clean 00:13:13.701 ************************************ 00:13:13.701 00:47:06 -- common/autotest_common.sh@1111 -- # lvs_grow 00:13:13.701 00:47:06 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:13.701 00:47:06 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:13.701 00:47:06 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:13.701 00:47:06 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:13.701 00:47:06 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:13.701 00:47:06 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:13.701 00:47:06 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:13.701 00:47:06 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:13.701 00:47:06 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:13.701 00:47:06 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:13.701 00:47:06 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:13.960 00:47:06 -- target/nvmf_lvs_grow.sh@28 -- # lvs=9edc1ca3-a9c7-45f7-9c1f-5b25027bb998 00:13:13.960 00:47:06 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9edc1ca3-a9c7-45f7-9c1f-5b25027bb998 00:13:13.960 00:47:06 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:14.219 00:47:06 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:14.219 00:47:06 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:14.219 00:47:06 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9edc1ca3-a9c7-45f7-9c1f-5b25027bb998 lvol 150 00:13:14.219 00:47:06 -- target/nvmf_lvs_grow.sh@33 -- # lvol=ff2ba4e3-51b1-4eb6-9ee0-e1be226bb638 00:13:14.219 00:47:06 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:14.478 00:47:06 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:14.478 [2024-04-27 00:47:07.064543] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:14.478 [2024-04-27 00:47:07.064596] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:14.478 true 00:13:14.478 00:47:07 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9edc1ca3-a9c7-45f7-9c1f-5b25027bb998 00:13:14.478 00:47:07 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:14.738 00:47:07 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:14.738 00:47:07 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:14.998 00:47:07 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ff2ba4e3-51b1-4eb6-9ee0-e1be226bb638 00:13:14.998 00:47:07 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:15.258 [2024-04-27 00:47:07.754632] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.258 00:47:07 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:15.258 00:47:07 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1641001 00:13:15.258 00:47:07 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:15.258 00:47:07 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:15.258 00:47:07 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1641001 
/var/tmp/bdevperf.sock 00:13:15.258 00:47:07 -- common/autotest_common.sh@817 -- # '[' -z 1641001 ']' 00:13:15.258 00:47:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:15.258 00:47:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:15.258 00:47:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:15.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:15.258 00:47:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:15.258 00:47:07 -- common/autotest_common.sh@10 -- # set +x 00:13:15.518 [2024-04-27 00:47:07.975874] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:13:15.518 [2024-04-27 00:47:07.975918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1641001 ] 00:13:15.518 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.518 [2024-04-27 00:47:08.027644] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.518 [2024-04-27 00:47:08.104273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.086 00:47:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:16.086 00:47:08 -- common/autotest_common.sh@850 -- # return 0 00:13:16.086 00:47:08 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:16.654 Nvme0n1 00:13:16.654 00:47:09 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:16.654 [ 00:13:16.654 { 00:13:16.654 "name": "Nvme0n1", 00:13:16.654 "aliases": [ 00:13:16.654 "ff2ba4e3-51b1-4eb6-9ee0-e1be226bb638" 00:13:16.654 ], 00:13:16.654 "product_name": "NVMe disk", 00:13:16.654 "block_size": 4096, 00:13:16.654 "num_blocks": 38912, 00:13:16.655 "uuid": "ff2ba4e3-51b1-4eb6-9ee0-e1be226bb638", 00:13:16.655 "assigned_rate_limits": { 00:13:16.655 "rw_ios_per_sec": 0, 00:13:16.655 "rw_mbytes_per_sec": 0, 00:13:16.655 "r_mbytes_per_sec": 0, 00:13:16.655 "w_mbytes_per_sec": 0 00:13:16.655 }, 00:13:16.655 "claimed": false, 00:13:16.655 "zoned": false, 00:13:16.655 "supported_io_types": { 00:13:16.655 "read": true, 00:13:16.655 "write": true, 00:13:16.655 "unmap": true, 00:13:16.655 "write_zeroes": true, 00:13:16.655 "flush": true, 00:13:16.655 "reset": true, 00:13:16.655 "compare": true, 00:13:16.655 "compare_and_write": true, 00:13:16.655 "abort": true, 00:13:16.655 "nvme_admin": true, 00:13:16.655 "nvme_io": true 00:13:16.655 }, 00:13:16.655 "memory_domains": [ 00:13:16.655 { 00:13:16.655 "dma_device_id": "system", 00:13:16.655 "dma_device_type": 1 00:13:16.655 } 00:13:16.655 ], 00:13:16.655 "driver_specific": { 00:13:16.655 "nvme": [ 00:13:16.655 { 00:13:16.655 "trid": { 00:13:16.655 "trtype": "TCP", 00:13:16.655 "adrfam": "IPv4", 00:13:16.655 "traddr": "10.0.0.2", 00:13:16.655 "trsvcid": "4420", 00:13:16.655 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:16.655 }, 00:13:16.655 "ctrlr_data": { 00:13:16.655 "cntlid": 1, 00:13:16.655 "vendor_id": "0x8086", 00:13:16.655 "model_number": "SPDK bdev Controller", 00:13:16.655 "serial_number": "SPDK0", 
00:13:16.655 "firmware_revision": "24.05", 00:13:16.655 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:16.655 "oacs": { 00:13:16.655 "security": 0, 00:13:16.655 "format": 0, 00:13:16.655 "firmware": 0, 00:13:16.655 "ns_manage": 0 00:13:16.655 }, 00:13:16.655 "multi_ctrlr": true, 00:13:16.655 "ana_reporting": false 00:13:16.655 }, 00:13:16.655 "vs": { 00:13:16.655 "nvme_version": "1.3" 00:13:16.655 }, 00:13:16.655 "ns_data": { 00:13:16.655 "id": 1, 00:13:16.655 "can_share": true 00:13:16.655 } 00:13:16.655 } 00:13:16.655 ], 00:13:16.655 "mp_policy": "active_passive" 00:13:16.655 } 00:13:16.655 } 00:13:16.655 ] 00:13:16.655 00:47:09 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1641233 00:13:16.655 00:47:09 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:16.655 00:47:09 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:16.930 Running I/O for 10 seconds... 00:13:17.867 Latency(us) 00:13:17.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.867 Nvme0n1 : 1.00 21704.00 84.78 0.00 0.00 0.00 0.00 0.00 00:13:17.867 =================================================================================================================== 00:13:17.867 Total : 21704.00 84.78 0.00 0.00 0.00 0.00 0.00 00:13:17.867 00:13:18.806 00:47:11 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9edc1ca3-a9c7-45f7-9c1f-5b25027bb998 00:13:18.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:18.806 Nvme0n1 : 2.00 22122.50 86.42 0.00 0.00 0.00 0.00 0.00 00:13:18.806 =================================================================================================================== 00:13:18.806 Total : 22122.50 86.42 0.00 0.00 0.00 0.00 0.00 00:13:18.806 00:13:18.806 true 00:13:18.806 00:47:11 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:18.806 00:47:11 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9edc1ca3-a9c7-45f7-9c1f-5b25027bb998 00:13:19.066 00:47:11 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:19.066 00:47:11 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:19.066 00:47:11 -- target/nvmf_lvs_grow.sh@65 -- # wait 1641233 00:13:20.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:20.041 Nvme0n1 : 3.00 22279.00 87.03 0.00 0.00 0.00 0.00 0.00 00:13:20.041 =================================================================================================================== 00:13:20.041 Total : 22279.00 87.03 0.00 0.00 0.00 0.00 0.00 00:13:20.041 00:13:21.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:21.003 Nvme0n1 : 4.00 22375.25 87.40 0.00 0.00 0.00 0.00 0.00 00:13:21.003 =================================================================================================================== 00:13:21.003 Total : 22375.25 87.40 0.00 0.00 0.00 0.00 0.00 00:13:21.003 00:13:21.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:21.941 Nvme0n1 : 5.00 22338.20 87.26 0.00 0.00 0.00 0.00 0.00 00:13:21.941 =================================================================================================================== 00:13:21.941 Total : 
22338.20 87.26 0.00 0.00 0.00 0.00 0.00 00:13:21.941 00:13:22.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:22.879 Nvme0n1 : 6.00 22309.83 87.15 0.00 0.00 0.00 0.00 0.00 00:13:22.879 =================================================================================================================== 00:13:22.879 Total : 22309.83 87.15 0.00 0.00 0.00 0.00 0.00 00:13:22.879 00:13:23.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:23.817 Nvme0n1 : 7.00 22286.14 87.06 0.00 0.00 0.00 0.00 0.00 00:13:23.817 =================================================================================================================== 00:13:23.817 Total : 22286.14 87.06 0.00 0.00 0.00 0.00 0.00 00:13:23.817 00:13:24.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.755 Nvme0n1 : 8.00 22266.75 86.98 0.00 0.00 0.00 0.00 0.00 00:13:24.755 =================================================================================================================== 00:13:24.755 Total : 22266.75 86.98 0.00 0.00 0.00 0.00 0.00 00:13:24.755 00:13:26.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:26.135 Nvme0n1 : 9.00 22253.11 86.93 0.00 0.00 0.00 0.00 0.00 00:13:26.135 =================================================================================================================== 00:13:26.135 Total : 22253.11 86.93 0.00 0.00 0.00 0.00 0.00 00:13:26.135 00:13:27.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:27.074 Nvme0n1 : 10.00 22259.80 86.95 0.00 0.00 0.00 0.00 0.00 00:13:27.074 =================================================================================================================== 00:13:27.074 Total : 22259.80 86.95 0.00 0.00 0.00 0.00 0.00 00:13:27.074 00:13:27.074 00:13:27.074 Latency(us) 00:13:27.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:27.074 Nvme0n1 : 10.00 22261.95 86.96 0.00 0.00 5746.31 3177.07 25644.52 00:13:27.074 =================================================================================================================== 00:13:27.074 Total : 22261.95 86.96 0.00 0.00 5746.31 3177.07 25644.52 00:13:27.074 0 00:13:27.074 00:47:19 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1641001 00:13:27.074 00:47:19 -- common/autotest_common.sh@936 -- # '[' -z 1641001 ']' 00:13:27.074 00:47:19 -- common/autotest_common.sh@940 -- # kill -0 1641001 00:13:27.074 00:47:19 -- common/autotest_common.sh@941 -- # uname 00:13:27.074 00:47:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:27.074 00:47:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1641001 00:13:27.074 00:47:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:27.074 00:47:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:27.074 00:47:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1641001' 00:13:27.074 killing process with pid 1641001 00:13:27.074 00:47:19 -- common/autotest_common.sh@955 -- # kill 1641001 00:13:27.074 Received shutdown signal, test time was about 10.000000 seconds 00:13:27.074 00:13:27.074 Latency(us) 00:13:27.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.074 =================================================================================================================== 
00:13:27.074 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:27.074 00:47:19 -- common/autotest_common.sh@960 -- # wait 1641001 00:13:27.074 00:47:19 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:27.334 00:47:19 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9edc1ca3-a9c7-45f7-9c1f-5b25027bb998 00:13:27.334 00:47:19 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:13:27.593 00:47:20 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:13:27.593 00:47:20 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:13:27.593 00:47:20 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:27.593 [2024-04-27 00:47:20.235578] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:27.855 00:47:20 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9edc1ca3-a9c7-45f7-9c1f-5b25027bb998 00:13:27.855 00:47:20 -- common/autotest_common.sh@638 -- # local es=0 00:13:27.855 00:47:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9edc1ca3-a9c7-45f7-9c1f-5b25027bb998 00:13:27.855 00:47:20 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.855 00:47:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:27.855 00:47:20 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.855 00:47:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:27.855 00:47:20 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.855 00:47:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:27.855 00:47:20 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.855 00:47:20 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:27.855 00:47:20 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9edc1ca3-a9c7-45f7-9c1f-5b25027bb998 00:13:27.855 request: 00:13:27.855 { 00:13:27.855 "uuid": "9edc1ca3-a9c7-45f7-9c1f-5b25027bb998", 00:13:27.855 "method": "bdev_lvol_get_lvstores", 00:13:27.855 "req_id": 1 00:13:27.855 } 00:13:27.855 Got JSON-RPC error response 00:13:27.855 response: 00:13:27.855 { 00:13:27.855 "code": -19, 00:13:27.855 "message": "No such device" 00:13:27.855 } 00:13:27.855 00:47:20 -- common/autotest_common.sh@641 -- # es=1 00:13:27.855 00:47:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:27.855 00:47:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:27.855 00:47:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:27.855 00:47:20 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:28.114 aio_bdev 00:13:28.114 00:47:20 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
ff2ba4e3-51b1-4eb6-9ee0-e1be226bb638 00:13:28.114 00:47:20 -- common/autotest_common.sh@885 -- # local bdev_name=ff2ba4e3-51b1-4eb6-9ee0-e1be226bb638 00:13:28.114 00:47:20 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:28.114 00:47:20 -- common/autotest_common.sh@887 -- # local i 00:13:28.114 00:47:20 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:28.114 00:47:20 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:28.114 00:47:20 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:28.374 00:47:20 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ff2ba4e3-51b1-4eb6-9ee0-e1be226bb638 -t 2000 00:13:28.374 [ 00:13:28.374 { 00:13:28.374 "name": "ff2ba4e3-51b1-4eb6-9ee0-e1be226bb638", 00:13:28.374 "aliases": [ 00:13:28.374 "lvs/lvol" 00:13:28.374 ], 00:13:28.374 "product_name": "Logical Volume", 00:13:28.374 "block_size": 4096, 00:13:28.374 "num_blocks": 38912, 00:13:28.374 "uuid": "ff2ba4e3-51b1-4eb6-9ee0-e1be226bb638", 00:13:28.374 "assigned_rate_limits": { 00:13:28.374 "rw_ios_per_sec": 0, 00:13:28.374 "rw_mbytes_per_sec": 0, 00:13:28.374 "r_mbytes_per_sec": 0, 00:13:28.374 "w_mbytes_per_sec": 0 00:13:28.374 }, 00:13:28.374 "claimed": false, 00:13:28.374 "zoned": false, 00:13:28.374 "supported_io_types": { 00:13:28.374 "read": true, 00:13:28.374 "write": true, 00:13:28.374 "unmap": true, 00:13:28.374 "write_zeroes": true, 00:13:28.374 "flush": false, 00:13:28.374 "reset": true, 00:13:28.374 "compare": false, 00:13:28.374 "compare_and_write": false, 00:13:28.374 "abort": false, 00:13:28.374 "nvme_admin": false, 00:13:28.374 "nvme_io": false 00:13:28.374 }, 00:13:28.374 "driver_specific": { 00:13:28.374 "lvol": { 00:13:28.374 "lvol_store_uuid": "9edc1ca3-a9c7-45f7-9c1f-5b25027bb998", 00:13:28.374 "base_bdev": "aio_bdev", 00:13:28.374 "thin_provision": false, 00:13:28.374 "snapshot": false, 00:13:28.374 "clone": false, 00:13:28.374 "esnap_clone": false 00:13:28.374 } 00:13:28.374 } 00:13:28.374 } 00:13:28.374 ] 00:13:28.374 00:47:20 -- common/autotest_common.sh@893 -- # return 0 00:13:28.374 00:47:20 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9edc1ca3-a9c7-45f7-9c1f-5b25027bb998 00:13:28.374 00:47:20 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:13:28.633 00:47:21 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:13:28.633 00:47:21 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9edc1ca3-a9c7-45f7-9c1f-5b25027bb998 00:13:28.633 00:47:21 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:13:28.633 00:47:21 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:13:28.633 00:47:21 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ff2ba4e3-51b1-4eb6-9ee0-e1be226bb638 00:13:28.892 00:47:21 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9edc1ca3-a9c7-45f7-9c1f-5b25027bb998 00:13:29.151 00:47:21 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:29.151 00:47:21 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 
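Stripped of the xtrace noise, the lvs_grow_clean pass above boils down to: build an lvol store on a 200M AIO file, carve out a 150M lvol, grow the backing file, export the lvol over NVMe/TCP, then grow the lvstore while bdevperf writes to it and check that total_data_clusters goes from 49 to 99 with the lvol still healthy. Condensed into the RPC calls used in this trace (checkout path shortened to $spdk here for readability; the lvstore and lvol UUIDs are whatever the create calls print):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
truncate -s 200M $spdk/test/nvmf/target/aio_bdev
$spdk/scripts/rpc.py bdev_aio_create $spdk/test/nvmf/target/aio_bdev aio_bdev 4096
lvs=$($spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)                 # 49 x 4MiB data clusters
lvol=$($spdk/scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)     # 150M lvol on the store
truncate -s 400M $spdk/test/nvmf/target/aio_bdev                     # grow the backing file...
$spdk/scripts/rpc.py bdev_aio_rescan aio_bdev                        # ...and let the AIO bdev see it
$spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# while bdevperf writes to the namespace over TCP, grow the lvstore into the new space
$spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
$spdk/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 -> 99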
00:13:29.410 00:13:29.410 real 0m15.679s 00:13:29.410 user 0m15.374s 00:13:29.410 sys 0m1.385s 00:13:29.410 00:47:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:29.410 00:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:29.410 ************************************ 00:13:29.410 END TEST lvs_grow_clean 00:13:29.410 ************************************ 00:13:29.410 00:47:21 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:29.410 00:47:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:29.410 00:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:29.410 00:47:21 -- common/autotest_common.sh@10 -- # set +x 00:13:29.410 ************************************ 00:13:29.410 START TEST lvs_grow_dirty 00:13:29.410 ************************************ 00:13:29.410 00:47:22 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:13:29.410 00:47:22 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:29.410 00:47:22 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:29.410 00:47:22 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:29.410 00:47:22 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:29.410 00:47:22 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:29.410 00:47:22 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:29.410 00:47:22 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.410 00:47:22 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.410 00:47:22 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:29.670 00:47:22 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:29.670 00:47:22 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:29.929 00:47:22 -- target/nvmf_lvs_grow.sh@28 -- # lvs=8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:29.929 00:47:22 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:29.929 00:47:22 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:29.929 00:47:22 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:29.929 00:47:22 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:29.929 00:47:22 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f lvol 150 00:13:30.188 00:47:22 -- target/nvmf_lvs_grow.sh@33 -- # lvol=d2b6810f-4fae-436a-b765-6042e9f0096a 00:13:30.188 00:47:22 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:30.188 00:47:22 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:30.453 [2024-04-27 00:47:22.901530] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 
51200, new block count 102400 00:13:30.453 [2024-04-27 00:47:22.901580] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:30.453 true 00:13:30.453 00:47:22 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:30.453 00:47:22 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:30.453 00:47:23 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:30.453 00:47:23 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:30.711 00:47:23 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d2b6810f-4fae-436a-b765-6042e9f0096a 00:13:30.970 00:47:23 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:30.971 00:47:23 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:31.255 00:47:23 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1643602 00:13:31.255 00:47:23 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:31.255 00:47:23 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1643602 /var/tmp/bdevperf.sock 00:13:31.255 00:47:23 -- common/autotest_common.sh@817 -- # '[' -z 1643602 ']' 00:13:31.255 00:47:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:31.255 00:47:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:31.255 00:47:23 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:31.255 00:47:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:31.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:31.255 00:47:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:31.255 00:47:23 -- common/autotest_common.sh@10 -- # set +x 00:13:31.255 [2024-04-27 00:47:23.776296] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
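The bdevperf run that follows is the initiator side of the test: it connects to the subsystem just exported on 10.0.0.2:4420, attaches the namespace as bdev Nvme0n1, and drives ten seconds of 4K random writes at queue depth 128 while the lvstore is grown underneath it. The three commands involved, as they appear in this trace (paths again shortened to $spdk):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# start bdevperf on core 1 with its own RPC socket; -z makes it wait for an explicit perform_tests RPC
$spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# attach the NVMe/TCP namespace exported by the target as bdev Nvme0n1
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# run the configured workload and report per-second IOPS, then final latency
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests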
00:13:31.255 [2024-04-27 00:47:23.776342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1643602 ] 00:13:31.255 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.255 [2024-04-27 00:47:23.829617] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.255 [2024-04-27 00:47:23.902173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.193 00:47:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:32.193 00:47:24 -- common/autotest_common.sh@850 -- # return 0 00:13:32.193 00:47:24 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:32.193 Nvme0n1 00:13:32.193 00:47:24 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:32.452 [ 00:13:32.452 { 00:13:32.452 "name": "Nvme0n1", 00:13:32.452 "aliases": [ 00:13:32.452 "d2b6810f-4fae-436a-b765-6042e9f0096a" 00:13:32.452 ], 00:13:32.452 "product_name": "NVMe disk", 00:13:32.452 "block_size": 4096, 00:13:32.452 "num_blocks": 38912, 00:13:32.452 "uuid": "d2b6810f-4fae-436a-b765-6042e9f0096a", 00:13:32.452 "assigned_rate_limits": { 00:13:32.452 "rw_ios_per_sec": 0, 00:13:32.452 "rw_mbytes_per_sec": 0, 00:13:32.452 "r_mbytes_per_sec": 0, 00:13:32.452 "w_mbytes_per_sec": 0 00:13:32.452 }, 00:13:32.452 "claimed": false, 00:13:32.452 "zoned": false, 00:13:32.452 "supported_io_types": { 00:13:32.452 "read": true, 00:13:32.452 "write": true, 00:13:32.452 "unmap": true, 00:13:32.452 "write_zeroes": true, 00:13:32.452 "flush": true, 00:13:32.452 "reset": true, 00:13:32.452 "compare": true, 00:13:32.452 "compare_and_write": true, 00:13:32.452 "abort": true, 00:13:32.452 "nvme_admin": true, 00:13:32.452 "nvme_io": true 00:13:32.452 }, 00:13:32.452 "memory_domains": [ 00:13:32.452 { 00:13:32.452 "dma_device_id": "system", 00:13:32.452 "dma_device_type": 1 00:13:32.452 } 00:13:32.452 ], 00:13:32.452 "driver_specific": { 00:13:32.452 "nvme": [ 00:13:32.452 { 00:13:32.452 "trid": { 00:13:32.452 "trtype": "TCP", 00:13:32.452 "adrfam": "IPv4", 00:13:32.452 "traddr": "10.0.0.2", 00:13:32.452 "trsvcid": "4420", 00:13:32.452 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:32.452 }, 00:13:32.452 "ctrlr_data": { 00:13:32.452 "cntlid": 1, 00:13:32.452 "vendor_id": "0x8086", 00:13:32.452 "model_number": "SPDK bdev Controller", 00:13:32.452 "serial_number": "SPDK0", 00:13:32.452 "firmware_revision": "24.05", 00:13:32.452 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:32.452 "oacs": { 00:13:32.452 "security": 0, 00:13:32.452 "format": 0, 00:13:32.452 "firmware": 0, 00:13:32.452 "ns_manage": 0 00:13:32.452 }, 00:13:32.452 "multi_ctrlr": true, 00:13:32.452 "ana_reporting": false 00:13:32.452 }, 00:13:32.452 "vs": { 00:13:32.452 "nvme_version": "1.3" 00:13:32.452 }, 00:13:32.452 "ns_data": { 00:13:32.452 "id": 1, 00:13:32.452 "can_share": true 00:13:32.452 } 00:13:32.452 } 00:13:32.452 ], 00:13:32.452 "mp_policy": "active_passive" 00:13:32.453 } 00:13:32.453 } 00:13:32.453 ] 00:13:32.453 00:47:25 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1643840 00:13:32.453 00:47:25 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:32.453 00:47:25 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:32.453 Running I/O for 10 seconds... 00:13:33.832 Latency(us) 00:13:33.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.832 Nvme0n1 : 1.00 22019.00 86.01 0.00 0.00 0.00 0.00 0.00 00:13:33.832 =================================================================================================================== 00:13:33.832 Total : 22019.00 86.01 0.00 0.00 0.00 0.00 0.00 00:13:33.832 00:13:34.401 00:47:27 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:34.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.401 Nvme0n1 : 2.00 22128.50 86.44 0.00 0.00 0.00 0.00 0.00 00:13:34.401 =================================================================================================================== 00:13:34.401 Total : 22128.50 86.44 0.00 0.00 0.00 0.00 0.00 00:13:34.401 00:13:34.660 true 00:13:34.660 00:47:27 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:34.660 00:47:27 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:34.920 00:47:27 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:34.920 00:47:27 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:34.920 00:47:27 -- target/nvmf_lvs_grow.sh@65 -- # wait 1643840 00:13:35.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.489 Nvme0n1 : 3.00 22041.67 86.10 0.00 0.00 0.00 0.00 0.00 00:13:35.489 =================================================================================================================== 00:13:35.489 Total : 22041.67 86.10 0.00 0.00 0.00 0.00 0.00 00:13:35.489 00:13:36.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.428 Nvme0n1 : 4.00 22059.75 86.17 0.00 0.00 0.00 0.00 0.00 00:13:36.428 =================================================================================================================== 00:13:36.428 Total : 22059.75 86.17 0.00 0.00 0.00 0.00 0.00 00:13:36.428 00:13:37.810 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.810 Nvme0n1 : 5.00 22064.80 86.19 0.00 0.00 0.00 0.00 0.00 00:13:37.810 =================================================================================================================== 00:13:37.810 Total : 22064.80 86.19 0.00 0.00 0.00 0.00 0.00 00:13:37.810 00:13:38.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.748 Nvme0n1 : 6.00 22158.00 86.55 0.00 0.00 0.00 0.00 0.00 00:13:38.748 =================================================================================================================== 00:13:38.748 Total : 22158.00 86.55 0.00 0.00 0.00 0.00 0.00 00:13:38.748 00:13:39.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.687 Nvme0n1 : 7.00 22162.71 86.57 0.00 0.00 0.00 0.00 0.00 00:13:39.687 =================================================================================================================== 00:13:39.687 Total : 22162.71 86.57 0.00 0.00 0.00 0.00 0.00 00:13:39.687 00:13:40.623 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:13:40.623 Nvme0n1 : 8.00 22162.12 86.57 0.00 0.00 0.00 0.00 0.00 00:13:40.623 =================================================================================================================== 00:13:40.623 Total : 22162.12 86.57 0.00 0.00 0.00 0.00 0.00 00:13:40.623 00:13:41.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.560 Nvme0n1 : 9.00 22220.78 86.80 0.00 0.00 0.00 0.00 0.00 00:13:41.560 =================================================================================================================== 00:13:41.560 Total : 22220.78 86.80 0.00 0.00 0.00 0.00 0.00 00:13:41.560 00:13:42.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.521 Nvme0n1 : 10.00 22287.70 87.06 0.00 0.00 0.00 0.00 0.00 00:13:42.521 =================================================================================================================== 00:13:42.522 Total : 22287.70 87.06 0.00 0.00 0.00 0.00 0.00 00:13:42.522 00:13:42.522 00:13:42.522 Latency(us) 00:13:42.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.522 Nvme0n1 : 10.00 22291.04 87.07 0.00 0.00 5738.93 3034.60 27354.16 00:13:42.522 =================================================================================================================== 00:13:42.522 Total : 22291.04 87.07 0.00 0.00 5738.93 3034.60 27354.16 00:13:42.522 0 00:13:42.522 00:47:35 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1643602 00:13:42.522 00:47:35 -- common/autotest_common.sh@936 -- # '[' -z 1643602 ']' 00:13:42.522 00:47:35 -- common/autotest_common.sh@940 -- # kill -0 1643602 00:13:42.522 00:47:35 -- common/autotest_common.sh@941 -- # uname 00:13:42.522 00:47:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:42.522 00:47:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1643602 00:13:42.522 00:47:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:42.522 00:47:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:42.522 00:47:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1643602' 00:13:42.522 killing process with pid 1643602 00:13:42.522 00:47:35 -- common/autotest_common.sh@955 -- # kill 1643602 00:13:42.522 Received shutdown signal, test time was about 10.000000 seconds 00:13:42.522 00:13:42.522 Latency(us) 00:13:42.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.522 =================================================================================================================== 00:13:42.522 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:42.522 00:47:35 -- common/autotest_common.sh@960 -- # wait 1643602 00:13:42.781 00:47:35 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:43.040 00:47:35 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:13:43.040 00:47:35 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:43.300 00:47:35 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:13:43.300 00:47:35 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:13:43.300 00:47:35 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1640488 00:13:43.300 
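Here the dirty variant earns its name: with the lvstore still open and 61 free clusters accounted for, the harness SIGKILLs the nvmf target instead of tearing anything down, restarts it, and re-creates the AIO bdev so that the blobstore has to recover the dirty metadata on load (the bs_recover notices a little further below). In outline, using only commands visible in this trace (paths shortened to $spdk):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
kill -9 "$nvmfpid"
wait "$nvmfpid" || true                      # reap the killed target; the lvstore is left dirty
ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # fresh target
$spdk/scripts/rpc.py bdev_aio_create $spdk/test/nvmf/target/aio_bdev aio_bdev 4096
# re-opening the AIO bdev loads the lvstore; bs_recover replays the dirty blobstore
# and the lvol (alias lvs/lvol) reappears with its clusters intact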
00:47:35 -- target/nvmf_lvs_grow.sh@74 -- # wait 1640488 00:13:43.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1640488 Killed "${NVMF_APP[@]}" "$@" 00:13:43.300 00:47:35 -- target/nvmf_lvs_grow.sh@74 -- # true 00:13:43.300 00:47:35 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:13:43.300 00:47:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:43.300 00:47:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:43.300 00:47:35 -- common/autotest_common.sh@10 -- # set +x 00:13:43.300 00:47:35 -- nvmf/common.sh@470 -- # nvmfpid=1645676 00:13:43.300 00:47:35 -- nvmf/common.sh@471 -- # waitforlisten 1645676 00:13:43.300 00:47:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:43.300 00:47:35 -- common/autotest_common.sh@817 -- # '[' -z 1645676 ']' 00:13:43.300 00:47:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.300 00:47:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:43.300 00:47:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.300 00:47:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:43.300 00:47:35 -- common/autotest_common.sh@10 -- # set +x 00:13:43.300 [2024-04-27 00:47:35.831367] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:13:43.300 [2024-04-27 00:47:35.831411] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.300 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.300 [2024-04-27 00:47:35.888677] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.300 [2024-04-27 00:47:35.965110] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.300 [2024-04-27 00:47:35.965144] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.300 [2024-04-27 00:47:35.965151] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.300 [2024-04-27 00:47:35.965158] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.300 [2024-04-27 00:47:35.965163] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:43.300 [2024-04-27 00:47:35.965184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.238 00:47:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:44.238 00:47:36 -- common/autotest_common.sh@850 -- # return 0 00:13:44.238 00:47:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:44.238 00:47:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:44.238 00:47:36 -- common/autotest_common.sh@10 -- # set +x 00:13:44.238 00:47:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.238 00:47:36 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:44.238 [2024-04-27 00:47:36.825677] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:44.238 [2024-04-27 00:47:36.825758] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:44.238 [2024-04-27 00:47:36.825781] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:44.238 00:47:36 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:13:44.238 00:47:36 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev d2b6810f-4fae-436a-b765-6042e9f0096a 00:13:44.238 00:47:36 -- common/autotest_common.sh@885 -- # local bdev_name=d2b6810f-4fae-436a-b765-6042e9f0096a 00:13:44.238 00:47:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:44.238 00:47:36 -- common/autotest_common.sh@887 -- # local i 00:13:44.238 00:47:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:44.238 00:47:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:44.238 00:47:36 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:44.497 00:47:37 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d2b6810f-4fae-436a-b765-6042e9f0096a -t 2000 00:13:44.497 [ 00:13:44.497 { 00:13:44.497 "name": "d2b6810f-4fae-436a-b765-6042e9f0096a", 00:13:44.497 "aliases": [ 00:13:44.497 "lvs/lvol" 00:13:44.497 ], 00:13:44.497 "product_name": "Logical Volume", 00:13:44.497 "block_size": 4096, 00:13:44.497 "num_blocks": 38912, 00:13:44.497 "uuid": "d2b6810f-4fae-436a-b765-6042e9f0096a", 00:13:44.497 "assigned_rate_limits": { 00:13:44.497 "rw_ios_per_sec": 0, 00:13:44.497 "rw_mbytes_per_sec": 0, 00:13:44.497 "r_mbytes_per_sec": 0, 00:13:44.497 "w_mbytes_per_sec": 0 00:13:44.497 }, 00:13:44.497 "claimed": false, 00:13:44.497 "zoned": false, 00:13:44.497 "supported_io_types": { 00:13:44.497 "read": true, 00:13:44.497 "write": true, 00:13:44.497 "unmap": true, 00:13:44.497 "write_zeroes": true, 00:13:44.497 "flush": false, 00:13:44.497 "reset": true, 00:13:44.497 "compare": false, 00:13:44.497 "compare_and_write": false, 00:13:44.497 "abort": false, 00:13:44.497 "nvme_admin": false, 00:13:44.497 "nvme_io": false 00:13:44.497 }, 00:13:44.497 "driver_specific": { 00:13:44.497 "lvol": { 00:13:44.497 "lvol_store_uuid": "8e8ac99a-68a2-41c2-8914-5b9fca985d0f", 00:13:44.497 "base_bdev": "aio_bdev", 00:13:44.497 "thin_provision": false, 00:13:44.497 "snapshot": false, 00:13:44.497 "clone": false, 00:13:44.497 "esnap_clone": false 00:13:44.497 } 00:13:44.497 } 00:13:44.497 } 00:13:44.497 ] 00:13:44.497 00:47:37 -- common/autotest_common.sh@893 -- # return 0 00:13:44.755 00:47:37 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:44.755 00:47:37 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:13:44.755 00:47:37 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:13:44.755 00:47:37 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:44.755 00:47:37 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:13:45.014 00:47:37 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:13:45.014 00:47:37 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:45.014 [2024-04-27 00:47:37.694160] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:45.272 00:47:37 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:45.272 00:47:37 -- common/autotest_common.sh@638 -- # local es=0 00:13:45.272 00:47:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:45.272 00:47:37 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.272 00:47:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:45.272 00:47:37 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.272 00:47:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:45.272 00:47:37 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.272 00:47:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:45.272 00:47:37 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.272 00:47:37 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:45.272 00:47:37 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:45.272 request: 00:13:45.272 { 00:13:45.272 "uuid": "8e8ac99a-68a2-41c2-8914-5b9fca985d0f", 00:13:45.272 "method": "bdev_lvol_get_lvstores", 00:13:45.272 "req_id": 1 00:13:45.272 } 00:13:45.272 Got JSON-RPC error response 00:13:45.272 response: 00:13:45.272 { 00:13:45.272 "code": -19, 00:13:45.272 "message": "No such device" 00:13:45.272 } 00:13:45.272 00:47:37 -- common/autotest_common.sh@641 -- # es=1 00:13:45.272 00:47:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:45.272 00:47:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:45.272 00:47:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:45.272 00:47:37 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:45.531 aio_bdev 00:13:45.531 00:47:38 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev d2b6810f-4fae-436a-b765-6042e9f0096a 00:13:45.531 00:47:38 -- 
common/autotest_common.sh@885 -- # local bdev_name=d2b6810f-4fae-436a-b765-6042e9f0096a 00:13:45.531 00:47:38 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:45.531 00:47:38 -- common/autotest_common.sh@887 -- # local i 00:13:45.531 00:47:38 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:45.531 00:47:38 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:45.531 00:47:38 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:45.790 00:47:38 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d2b6810f-4fae-436a-b765-6042e9f0096a -t 2000 00:13:45.790 [ 00:13:45.790 { 00:13:45.790 "name": "d2b6810f-4fae-436a-b765-6042e9f0096a", 00:13:45.790 "aliases": [ 00:13:45.790 "lvs/lvol" 00:13:45.790 ], 00:13:45.790 "product_name": "Logical Volume", 00:13:45.790 "block_size": 4096, 00:13:45.790 "num_blocks": 38912, 00:13:45.790 "uuid": "d2b6810f-4fae-436a-b765-6042e9f0096a", 00:13:45.790 "assigned_rate_limits": { 00:13:45.790 "rw_ios_per_sec": 0, 00:13:45.790 "rw_mbytes_per_sec": 0, 00:13:45.790 "r_mbytes_per_sec": 0, 00:13:45.790 "w_mbytes_per_sec": 0 00:13:45.790 }, 00:13:45.790 "claimed": false, 00:13:45.790 "zoned": false, 00:13:45.790 "supported_io_types": { 00:13:45.790 "read": true, 00:13:45.790 "write": true, 00:13:45.790 "unmap": true, 00:13:45.790 "write_zeroes": true, 00:13:45.790 "flush": false, 00:13:45.790 "reset": true, 00:13:45.790 "compare": false, 00:13:45.790 "compare_and_write": false, 00:13:45.790 "abort": false, 00:13:45.790 "nvme_admin": false, 00:13:45.790 "nvme_io": false 00:13:45.790 }, 00:13:45.790 "driver_specific": { 00:13:45.790 "lvol": { 00:13:45.790 "lvol_store_uuid": "8e8ac99a-68a2-41c2-8914-5b9fca985d0f", 00:13:45.790 "base_bdev": "aio_bdev", 00:13:45.790 "thin_provision": false, 00:13:45.790 "snapshot": false, 00:13:45.790 "clone": false, 00:13:45.790 "esnap_clone": false 00:13:45.790 } 00:13:45.790 } 00:13:45.790 } 00:13:45.790 ] 00:13:45.790 00:47:38 -- common/autotest_common.sh@893 -- # return 0 00:13:45.790 00:47:38 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:45.790 00:47:38 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:13:46.049 00:47:38 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:13:46.049 00:47:38 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:46.049 00:47:38 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:13:46.308 00:47:38 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:13:46.308 00:47:38 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d2b6810f-4fae-436a-b765-6042e9f0096a 00:13:46.308 00:47:38 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8e8ac99a-68a2-41c2-8914-5b9fca985d0f 00:13:46.567 00:47:39 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:46.826 00:47:39 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:46.826 00:13:46.826 real 0m17.290s 00:13:46.826 user 
0m44.626s 00:13:46.826 sys 0m3.759s 00:13:46.826 00:47:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:46.826 00:47:39 -- common/autotest_common.sh@10 -- # set +x 00:13:46.826 ************************************ 00:13:46.826 END TEST lvs_grow_dirty 00:13:46.826 ************************************ 00:13:46.826 00:47:39 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:46.826 00:47:39 -- common/autotest_common.sh@794 -- # type=--id 00:13:46.826 00:47:39 -- common/autotest_common.sh@795 -- # id=0 00:13:46.826 00:47:39 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:13:46.826 00:47:39 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:46.826 00:47:39 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:13:46.826 00:47:39 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:13:46.826 00:47:39 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:13:46.826 00:47:39 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:46.826 nvmf_trace.0 00:13:46.826 00:47:39 -- common/autotest_common.sh@809 -- # return 0 00:13:46.826 00:47:39 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:46.826 00:47:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:46.826 00:47:39 -- nvmf/common.sh@117 -- # sync 00:13:46.826 00:47:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:46.826 00:47:39 -- nvmf/common.sh@120 -- # set +e 00:13:46.826 00:47:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.826 00:47:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:46.826 rmmod nvme_tcp 00:13:46.826 rmmod nvme_fabrics 00:13:46.826 rmmod nvme_keyring 00:13:46.826 00:47:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.826 00:47:39 -- nvmf/common.sh@124 -- # set -e 00:13:46.826 00:47:39 -- nvmf/common.sh@125 -- # return 0 00:13:46.826 00:47:39 -- nvmf/common.sh@478 -- # '[' -n 1645676 ']' 00:13:46.826 00:47:39 -- nvmf/common.sh@479 -- # killprocess 1645676 00:13:46.826 00:47:39 -- common/autotest_common.sh@936 -- # '[' -z 1645676 ']' 00:13:46.826 00:47:39 -- common/autotest_common.sh@940 -- # kill -0 1645676 00:13:46.826 00:47:39 -- common/autotest_common.sh@941 -- # uname 00:13:46.826 00:47:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:46.826 00:47:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1645676 00:13:46.826 00:47:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:46.826 00:47:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:46.826 00:47:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1645676' 00:13:46.826 killing process with pid 1645676 00:13:46.826 00:47:39 -- common/autotest_common.sh@955 -- # kill 1645676 00:13:46.826 00:47:39 -- common/autotest_common.sh@960 -- # wait 1645676 00:13:47.120 00:47:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:47.120 00:47:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:47.120 00:47:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:47.120 00:47:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.120 00:47:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:47.120 00:47:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.120 00:47:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.120 00:47:39 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:49.673 00:47:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:49.673 00:13:49.673 real 0m42.225s 00:13:49.673 user 1m5.886s 00:13:49.673 sys 0m9.642s 00:13:49.673 00:47:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:49.673 00:47:41 -- common/autotest_common.sh@10 -- # set +x 00:13:49.673 ************************************ 00:13:49.673 END TEST nvmf_lvs_grow 00:13:49.673 ************************************ 00:13:49.673 00:47:41 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:49.673 00:47:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:49.673 00:47:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:49.673 00:47:41 -- common/autotest_common.sh@10 -- # set +x 00:13:49.673 ************************************ 00:13:49.673 START TEST nvmf_bdev_io_wait 00:13:49.673 ************************************ 00:13:49.673 00:47:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:49.673 * Looking for test storage... 00:13:49.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.673 00:47:42 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.673 00:47:42 -- nvmf/common.sh@7 -- # uname -s 00:13:49.673 00:47:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.673 00:47:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.673 00:47:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.673 00:47:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.673 00:47:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.673 00:47:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.673 00:47:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.673 00:47:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.673 00:47:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.673 00:47:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.673 00:47:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:49.673 00:47:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:49.673 00:47:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.673 00:47:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.673 00:47:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.673 00:47:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.673 00:47:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.673 00:47:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.673 00:47:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.673 00:47:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.673 00:47:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.673 00:47:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.673 00:47:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.673 00:47:42 -- paths/export.sh@5 -- # export PATH 00:13:49.673 00:47:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.673 00:47:42 -- nvmf/common.sh@47 -- # : 0 00:13:49.673 00:47:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.673 00:47:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.673 00:47:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.673 00:47:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.673 00:47:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.673 00:47:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.673 00:47:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.673 00:47:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.673 00:47:42 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:49.673 00:47:42 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:49.673 00:47:42 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:49.673 00:47:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:49.673 00:47:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.673 00:47:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:49.673 00:47:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:49.673 00:47:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:49.673 00:47:42 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.673 00:47:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.673 00:47:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.673 00:47:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:49.673 00:47:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:49.673 00:47:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.673 00:47:42 -- common/autotest_common.sh@10 -- # set +x 00:13:54.950 00:47:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:54.950 00:47:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:54.950 00:47:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:54.950 00:47:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:54.950 00:47:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:54.950 00:47:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:54.950 00:47:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:54.950 00:47:47 -- nvmf/common.sh@295 -- # net_devs=() 00:13:54.950 00:47:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:54.950 00:47:47 -- nvmf/common.sh@296 -- # e810=() 00:13:54.950 00:47:47 -- nvmf/common.sh@296 -- # local -ga e810 00:13:54.950 00:47:47 -- nvmf/common.sh@297 -- # x722=() 00:13:54.950 00:47:47 -- nvmf/common.sh@297 -- # local -ga x722 00:13:54.950 00:47:47 -- nvmf/common.sh@298 -- # mlx=() 00:13:54.950 00:47:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:54.950 00:47:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.950 00:47:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.950 00:47:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.950 00:47:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.950 00:47:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.950 00:47:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.950 00:47:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.950 00:47:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.950 00:47:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.950 00:47:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.950 00:47:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.950 00:47:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:54.950 00:47:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:54.950 00:47:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:54.950 00:47:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.950 00:47:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:54.950 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:54.950 00:47:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
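The loop starting here walks the detected E810 PCI functions and resolves each one to its kernel net device through sysfs, which is what produces the "Found net devices under ..." lines that follow. A small self-contained sketch of that lookup, with the address taken from this run:

# Resolve the net device name behind a PCI function via sysfs.
pci=0000:86:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep only the interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"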
00:13:54.950 00:47:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:54.950 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:54.950 00:47:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:54.950 00:47:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.950 00:47:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.950 00:47:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:54.950 00:47:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.950 00:47:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:54.950 Found net devices under 0000:86:00.0: cvl_0_0 00:13:54.950 00:47:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.950 00:47:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.950 00:47:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.950 00:47:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:54.950 00:47:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.950 00:47:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:54.950 Found net devices under 0000:86:00.1: cvl_0_1 00:13:54.950 00:47:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.950 00:47:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:54.950 00:47:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:54.950 00:47:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:54.950 00:47:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.950 00:47:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.950 00:47:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.950 00:47:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:54.950 00:47:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.950 00:47:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.950 00:47:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:54.950 00:47:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.950 00:47:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.950 00:47:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:54.950 00:47:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:54.950 00:47:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.950 00:47:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.950 00:47:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.950 00:47:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.950 00:47:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:54.950 00:47:47 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.950 00:47:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.950 00:47:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.950 00:47:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:54.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:54.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:13:54.950 00:13:54.950 --- 10.0.0.2 ping statistics --- 00:13:54.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.950 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:13:54.950 00:47:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.413 ms 00:13:54.950 00:13:54.950 --- 10.0.0.1 ping statistics --- 00:13:54.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.950 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:13:54.950 00:47:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.950 00:47:47 -- nvmf/common.sh@411 -- # return 0 00:13:54.950 00:47:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:54.950 00:47:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.950 00:47:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:54.950 00:47:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.950 00:47:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:54.950 00:47:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:54.950 00:47:47 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:54.950 00:47:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:54.950 00:47:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:54.950 00:47:47 -- common/autotest_common.sh@10 -- # set +x 00:13:54.950 00:47:47 -- nvmf/common.sh@470 -- # nvmfpid=1649742 00:13:54.950 00:47:47 -- nvmf/common.sh@471 -- # waitforlisten 1649742 00:13:54.950 00:47:47 -- common/autotest_common.sh@817 -- # '[' -z 1649742 ']' 00:13:54.950 00:47:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.950 00:47:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:54.950 00:47:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.950 00:47:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:54.950 00:47:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:54.950 00:47:47 -- common/autotest_common.sh@10 -- # set +x 00:13:54.950 [2024-04-27 00:47:47.402180] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
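Condensed, the nvmf_tcp_init sequence traced above builds the two-sided TCP topology used by the rest of the test: cvl_0_0 becomes the target port (10.0.0.2) inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened, and reachability is checked in both directions. A restatement of those commands, with names and addresses from this run:

# Target side lives in its own network namespace; initiator side stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic in, then verify both directions before starting the target.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1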
00:13:54.950 [2024-04-27 00:47:47.402223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.950 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.950 [2024-04-27 00:47:47.458542] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.950 [2024-04-27 00:47:47.537703] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.950 [2024-04-27 00:47:47.537739] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.950 [2024-04-27 00:47:47.537746] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.950 [2024-04-27 00:47:47.537753] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.950 [2024-04-27 00:47:47.537758] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.950 [2024-04-27 00:47:47.537793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.950 [2024-04-27 00:47:47.537811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.950 [2024-04-27 00:47:47.537902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.950 [2024-04-27 00:47:47.537904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.519 00:47:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:55.519 00:47:48 -- common/autotest_common.sh@850 -- # return 0 00:13:55.519 00:47:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:55.519 00:47:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:55.519 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 00:47:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:55.779 00:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.779 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 00:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:55.779 00:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.779 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 00:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:55.779 00:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.779 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 [2024-04-27 00:47:48.320462] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.779 00:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:55.779 00:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.779 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 Malloc0 00:13:55.779 00:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:55.779 00:47:48 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.779 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 00:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:55.779 00:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.779 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 00:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.779 00:47:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:55.779 00:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:55.779 [2024-04-27 00:47:48.382938] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.779 00:47:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1649988 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@30 -- # READ_PID=1649990 00:13:55.779 00:47:48 -- nvmf/common.sh@521 -- # config=() 00:13:55.779 00:47:48 -- nvmf/common.sh@521 -- # local subsystem config 00:13:55.779 00:47:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:55.779 00:47:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:55.779 { 00:13:55.779 "params": { 00:13:55.779 "name": "Nvme$subsystem", 00:13:55.779 "trtype": "$TEST_TRANSPORT", 00:13:55.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.779 "adrfam": "ipv4", 00:13:55.779 "trsvcid": "$NVMF_PORT", 00:13:55.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.779 "hdgst": ${hdgst:-false}, 00:13:55.779 "ddgst": ${ddgst:-false} 00:13:55.779 }, 00:13:55.779 "method": "bdev_nvme_attach_controller" 00:13:55.779 } 00:13:55.779 EOF 00:13:55.779 )") 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1649992 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:55.779 00:47:48 -- nvmf/common.sh@521 -- # config=() 00:13:55.779 00:47:48 -- nvmf/common.sh@521 -- # local subsystem config 00:13:55.779 00:47:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:55.779 00:47:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:55.779 { 00:13:55.779 "params": { 00:13:55.779 "name": "Nvme$subsystem", 00:13:55.779 "trtype": "$TEST_TRANSPORT", 00:13:55.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.779 "adrfam": "ipv4", 00:13:55.779 "trsvcid": "$NVMF_PORT", 00:13:55.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.779 "hdgst": ${hdgst:-false}, 00:13:55.779 "ddgst": ${ddgst:-false} 00:13:55.779 }, 00:13:55.779 "method": "bdev_nvme_attach_controller" 00:13:55.779 } 00:13:55.779 EOF 00:13:55.779 )") 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1649995 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@35 -- # sync 00:13:55.779 00:47:48 -- nvmf/common.sh@543 -- # cat 00:13:55.779 00:47:48 -- nvmf/common.sh@521 -- # config=() 00:13:55.779 00:47:48 -- nvmf/common.sh@521 -- # local subsystem config 00:13:55.779 00:47:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:55.779 00:47:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:55.779 { 00:13:55.779 "params": { 00:13:55.779 "name": "Nvme$subsystem", 00:13:55.779 "trtype": "$TEST_TRANSPORT", 00:13:55.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.779 "adrfam": "ipv4", 00:13:55.779 "trsvcid": "$NVMF_PORT", 00:13:55.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.779 "hdgst": ${hdgst:-false}, 00:13:55.779 "ddgst": ${ddgst:-false} 00:13:55.779 }, 00:13:55.779 "method": "bdev_nvme_attach_controller" 00:13:55.779 } 00:13:55.779 EOF 00:13:55.779 )") 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:55.779 00:47:48 -- nvmf/common.sh@521 -- # config=() 00:13:55.779 00:47:48 -- nvmf/common.sh@543 -- # cat 00:13:55.779 00:47:48 -- nvmf/common.sh@521 -- # local subsystem config 00:13:55.779 00:47:48 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:55.779 00:47:48 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:55.779 { 00:13:55.779 "params": { 00:13:55.779 "name": "Nvme$subsystem", 00:13:55.779 "trtype": "$TEST_TRANSPORT", 00:13:55.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.779 "adrfam": "ipv4", 00:13:55.779 "trsvcid": "$NVMF_PORT", 00:13:55.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.779 "hdgst": ${hdgst:-false}, 00:13:55.779 "ddgst": ${ddgst:-false} 00:13:55.779 }, 00:13:55.779 "method": "bdev_nvme_attach_controller" 00:13:55.779 } 00:13:55.779 EOF 00:13:55.779 )") 00:13:55.779 00:47:48 -- nvmf/common.sh@543 -- # cat 00:13:55.779 00:47:48 -- target/bdev_io_wait.sh@37 -- # wait 1649988 00:13:55.779 00:47:48 -- nvmf/common.sh@543 -- # cat 00:13:55.779 00:47:48 -- nvmf/common.sh@545 -- # jq . 00:13:55.779 00:47:48 -- nvmf/common.sh@545 -- # jq . 00:13:55.779 00:47:48 -- nvmf/common.sh@545 -- # jq . 00:13:55.779 00:47:48 -- nvmf/common.sh@546 -- # IFS=, 00:13:55.779 00:47:48 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:55.779 "params": { 00:13:55.779 "name": "Nvme1", 00:13:55.779 "trtype": "tcp", 00:13:55.779 "traddr": "10.0.0.2", 00:13:55.779 "adrfam": "ipv4", 00:13:55.779 "trsvcid": "4420", 00:13:55.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.779 "hdgst": false, 00:13:55.779 "ddgst": false 00:13:55.779 }, 00:13:55.779 "method": "bdev_nvme_attach_controller" 00:13:55.779 }' 00:13:55.779 00:47:48 -- nvmf/common.sh@545 -- # jq . 
00:13:55.779 00:47:48 -- nvmf/common.sh@546 -- # IFS=, 00:13:55.779 00:47:48 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:55.779 "params": { 00:13:55.779 "name": "Nvme1", 00:13:55.779 "trtype": "tcp", 00:13:55.779 "traddr": "10.0.0.2", 00:13:55.779 "adrfam": "ipv4", 00:13:55.779 "trsvcid": "4420", 00:13:55.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.780 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.780 "hdgst": false, 00:13:55.780 "ddgst": false 00:13:55.780 }, 00:13:55.780 "method": "bdev_nvme_attach_controller" 00:13:55.780 }' 00:13:55.780 00:47:48 -- nvmf/common.sh@546 -- # IFS=, 00:13:55.780 00:47:48 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:55.780 "params": { 00:13:55.780 "name": "Nvme1", 00:13:55.780 "trtype": "tcp", 00:13:55.780 "traddr": "10.0.0.2", 00:13:55.780 "adrfam": "ipv4", 00:13:55.780 "trsvcid": "4420", 00:13:55.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.780 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.780 "hdgst": false, 00:13:55.780 "ddgst": false 00:13:55.780 }, 00:13:55.780 "method": "bdev_nvme_attach_controller" 00:13:55.780 }' 00:13:55.780 00:47:48 -- nvmf/common.sh@546 -- # IFS=, 00:13:55.780 00:47:48 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:55.780 "params": { 00:13:55.780 "name": "Nvme1", 00:13:55.780 "trtype": "tcp", 00:13:55.780 "traddr": "10.0.0.2", 00:13:55.780 "adrfam": "ipv4", 00:13:55.780 "trsvcid": "4420", 00:13:55.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.780 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.780 "hdgst": false, 00:13:55.780 "ddgst": false 00:13:55.780 }, 00:13:55.780 "method": "bdev_nvme_attach_controller" 00:13:55.780 }' 00:13:55.780 [2024-04-27 00:47:48.433459] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:13:55.780 [2024-04-27 00:47:48.433507] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:55.780 [2024-04-27 00:47:48.433767] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:13:55.780 [2024-04-27 00:47:48.433775] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:13:55.780 [2024-04-27 00:47:48.433810] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:55.780 [2024-04-27 00:47:48.433813] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:55.780 [2024-04-27 00:47:48.434441] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
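Each bdevperf instance above reads its bdev configuration from --json /dev/fd/63. The trace only prints the inner bdev_nvme_attach_controller entry, so the surrounding subsystems wrapper in the sketch below is an assumption based on SPDK's usual JSON config layout; parameter values are copied from this run, and a temp file stands in for the fd 63 redirection:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
# Attach to the target listening on 10.0.0.2:4420 and drive a 1-second write workload at QD 128.
"$rootdir/build/examples/bdevperf" -q 128 -o 4096 -w write -t 1 -s 256 --json "$cfg"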
00:13:55.780 [2024-04-27 00:47:48.434482] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:56.040 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.040 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.040 [2024-04-27 00:47:48.624141] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.040 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.040 [2024-04-27 00:47:48.676920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.040 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.040 [2024-04-27 00:47:48.708729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:56.040 [2024-04-27 00:47:48.719555] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.299 [2024-04-27 00:47:48.752848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:13:56.299 [2024-04-27 00:47:48.789078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:56.299 [2024-04-27 00:47:48.836518] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.299 [2024-04-27 00:47:48.923061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:56.299 Running I/O for 1 seconds... 00:13:56.299 Running I/O for 1 seconds... 00:13:56.559 Running I/O for 1 seconds... 00:13:56.559 Running I/O for 1 seconds... 00:13:57.505 00:13:57.505 Latency(us) 00:13:57.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.506 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:57.506 Nvme1n1 : 1.01 11147.66 43.55 0.00 0.00 11444.45 5641.79 31457.28 00:13:57.506 =================================================================================================================== 00:13:57.506 Total : 11147.66 43.55 0.00 0.00 11444.45 5641.79 31457.28 00:13:57.506 00:13:57.506 Latency(us) 00:13:57.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.506 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:57.506 Nvme1n1 : 1.01 11046.06 43.15 0.00 0.00 11539.93 3219.81 16070.57 00:13:57.506 =================================================================================================================== 00:13:57.506 Total : 11046.06 43.15 0.00 0.00 11539.93 3219.81 16070.57 00:13:57.506 00:13:57.506 Latency(us) 00:13:57.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.506 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:57.506 Nvme1n1 : 1.00 251798.28 983.59 0.00 0.00 506.12 203.02 1681.14 00:13:57.506 =================================================================================================================== 00:13:57.506 Total : 251798.28 983.59 0.00 0.00 506.12 203.02 1681.14 00:13:57.506 00:13:57.506 Latency(us) 00:13:57.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.506 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:57.506 Nvme1n1 : 1.01 10428.76 40.74 0.00 0.00 12226.93 6012.22 22795.13 00:13:57.506 =================================================================================================================== 00:13:57.506 Total : 10428.76 40.74 0.00 0.00 12226.93 6012.22 22795.13 00:13:57.769 00:47:50 -- target/bdev_io_wait.sh@38 -- # wait 1649990 
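The four latency tables above come from one bdevperf process per workload (write, read, flush, unmap), started in the background and reaped with wait via WRITE_PID, READ_PID, FLUSH_PID and UNMAP_PID. A simplified sketch of that fan-out and fan-in; gen_config is a hypothetical stand-in for gen_nvmf_target_json, and the per-instance core masks and shared-memory ids used in the real run are omitted:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
pids=()
for w in write read flush unmap; do
    # gen_config (hypothetical) must emit the JSON config shown above for each instance.
    "$rootdir/build/examples/bdevperf" -q 128 -o 4096 -w "$w" -t 1 -s 256 --json <(gen_config) &
    pids+=($!)
done
wait "${pids[@]}"   # block until every workload has printed its latency table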
00:13:57.769 00:47:50 -- target/bdev_io_wait.sh@39 -- # wait 1649992 00:13:57.769 00:47:50 -- target/bdev_io_wait.sh@40 -- # wait 1649995 00:13:57.769 00:47:50 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:57.769 00:47:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:57.770 00:47:50 -- common/autotest_common.sh@10 -- # set +x 00:13:57.770 00:47:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:57.770 00:47:50 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:57.770 00:47:50 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:57.770 00:47:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:57.770 00:47:50 -- nvmf/common.sh@117 -- # sync 00:13:57.770 00:47:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:57.770 00:47:50 -- nvmf/common.sh@120 -- # set +e 00:13:57.770 00:47:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:57.770 00:47:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:57.770 rmmod nvme_tcp 00:13:57.770 rmmod nvme_fabrics 00:13:58.028 rmmod nvme_keyring 00:13:58.028 00:47:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.028 00:47:50 -- nvmf/common.sh@124 -- # set -e 00:13:58.028 00:47:50 -- nvmf/common.sh@125 -- # return 0 00:13:58.028 00:47:50 -- nvmf/common.sh@478 -- # '[' -n 1649742 ']' 00:13:58.028 00:47:50 -- nvmf/common.sh@479 -- # killprocess 1649742 00:13:58.028 00:47:50 -- common/autotest_common.sh@936 -- # '[' -z 1649742 ']' 00:13:58.028 00:47:50 -- common/autotest_common.sh@940 -- # kill -0 1649742 00:13:58.028 00:47:50 -- common/autotest_common.sh@941 -- # uname 00:13:58.028 00:47:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:58.028 00:47:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1649742 00:13:58.028 00:47:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:58.028 00:47:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:58.028 00:47:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1649742' 00:13:58.028 killing process with pid 1649742 00:13:58.028 00:47:50 -- common/autotest_common.sh@955 -- # kill 1649742 00:13:58.029 00:47:50 -- common/autotest_common.sh@960 -- # wait 1649742 00:13:58.288 00:47:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:58.288 00:47:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:58.288 00:47:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:58.288 00:47:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.288 00:47:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.288 00:47:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.288 00:47:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.288 00:47:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.195 00:47:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:00.195 00:14:00.195 real 0m10.849s 00:14:00.195 user 0m19.758s 00:14:00.195 sys 0m5.597s 00:14:00.195 00:47:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:00.195 00:47:52 -- common/autotest_common.sh@10 -- # set +x 00:14:00.195 ************************************ 00:14:00.195 END TEST nvmf_bdev_io_wait 00:14:00.195 ************************************ 00:14:00.195 00:47:52 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:00.195 00:47:52 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:00.195 00:47:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:00.195 00:47:52 -- common/autotest_common.sh@10 -- # set +x 00:14:00.453 ************************************ 00:14:00.453 START TEST nvmf_queue_depth 00:14:00.453 ************************************ 00:14:00.453 00:47:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:00.453 * Looking for test storage... 00:14:00.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.453 00:47:53 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.453 00:47:53 -- nvmf/common.sh@7 -- # uname -s 00:14:00.453 00:47:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.453 00:47:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.453 00:47:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.454 00:47:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.454 00:47:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.454 00:47:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.454 00:47:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.454 00:47:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.454 00:47:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.454 00:47:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.454 00:47:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:00.454 00:47:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:00.454 00:47:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.454 00:47:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.454 00:47:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.454 00:47:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.454 00:47:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.454 00:47:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.454 00:47:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.454 00:47:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.454 00:47:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.454 00:47:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.454 00:47:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.454 00:47:53 -- paths/export.sh@5 -- # export PATH 00:14:00.454 00:47:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.454 00:47:53 -- nvmf/common.sh@47 -- # : 0 00:14:00.454 00:47:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.454 00:47:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.454 00:47:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.454 00:47:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.454 00:47:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.454 00:47:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.454 00:47:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.454 00:47:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.454 00:47:53 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:00.454 00:47:53 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:00.454 00:47:53 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:00.454 00:47:53 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:00.454 00:47:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:00.454 00:47:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.454 00:47:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:00.454 00:47:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:00.454 00:47:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:00.454 00:47:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.454 00:47:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.454 00:47:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.454 00:47:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:00.454 00:47:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:00.454 00:47:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.454 00:47:53 -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.766 00:47:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:05.766 00:47:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:05.766 00:47:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:05.766 00:47:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:05.766 00:47:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:05.766 00:47:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:05.766 00:47:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:05.766 00:47:57 -- nvmf/common.sh@295 -- # net_devs=() 00:14:05.766 00:47:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:05.766 00:47:57 -- nvmf/common.sh@296 -- # e810=() 00:14:05.766 00:47:57 -- nvmf/common.sh@296 -- # local -ga e810 00:14:05.766 00:47:57 -- nvmf/common.sh@297 -- # x722=() 00:14:05.766 00:47:57 -- nvmf/common.sh@297 -- # local -ga x722 00:14:05.766 00:47:57 -- nvmf/common.sh@298 -- # mlx=() 00:14:05.766 00:47:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:05.766 00:47:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.766 00:47:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.766 00:47:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.766 00:47:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.766 00:47:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.766 00:47:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.766 00:47:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.766 00:47:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.766 00:47:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.766 00:47:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.766 00:47:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.766 00:47:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:05.766 00:47:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:05.766 00:47:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:05.766 00:47:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:05.766 00:47:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:05.766 00:47:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:05.767 00:47:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.767 00:47:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:05.767 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:05.767 00:47:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.767 00:47:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.767 00:47:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.767 00:47:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.767 00:47:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.767 00:47:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.767 00:47:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:05.767 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:05.767 00:47:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.767 00:47:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.767 00:47:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.767 00:47:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:14:05.767 00:47:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.767 00:47:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:05.767 00:47:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:05.767 00:47:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:05.767 00:47:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.767 00:47:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.767 00:47:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:05.767 00:47:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.767 00:47:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:05.767 Found net devices under 0000:86:00.0: cvl_0_0 00:14:05.767 00:47:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.767 00:47:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.767 00:47:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.767 00:47:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:05.767 00:47:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.767 00:47:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:05.767 Found net devices under 0000:86:00.1: cvl_0_1 00:14:05.767 00:47:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.767 00:47:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:05.767 00:47:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:05.767 00:47:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:05.767 00:47:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:05.767 00:47:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:05.767 00:47:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.767 00:47:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.767 00:47:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.767 00:47:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:05.767 00:47:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.767 00:47:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.767 00:47:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:05.767 00:47:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.767 00:47:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.767 00:47:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:05.767 00:47:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:05.767 00:47:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.767 00:47:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.767 00:47:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.767 00:47:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.767 00:47:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:05.767 00:47:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.767 00:47:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.767 00:47:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.767 00:47:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:05.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:05.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:14:05.767 00:14:05.767 --- 10.0.0.2 ping statistics --- 00:14:05.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.767 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:14:05.767 00:47:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:05.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:14:05.767 00:14:05.767 --- 10.0.0.1 ping statistics --- 00:14:05.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.767 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:14:05.767 00:47:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.767 00:47:58 -- nvmf/common.sh@411 -- # return 0 00:14:05.767 00:47:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:05.767 00:47:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.767 00:47:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:05.767 00:47:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:05.767 00:47:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.767 00:47:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:05.767 00:47:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:05.767 00:47:58 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:05.767 00:47:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:05.767 00:47:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:05.767 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:14:05.767 00:47:58 -- nvmf/common.sh@470 -- # nvmfpid=1653770 00:14:05.767 00:47:58 -- nvmf/common.sh@471 -- # waitforlisten 1653770 00:14:05.767 00:47:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:05.767 00:47:58 -- common/autotest_common.sh@817 -- # '[' -z 1653770 ']' 00:14:05.767 00:47:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.767 00:47:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:05.767 00:47:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.767 00:47:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:05.767 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:14:05.767 [2024-04-27 00:47:58.298026] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:14:05.767 [2024-04-27 00:47:58.298089] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.767 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.767 [2024-04-27 00:47:58.356659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.767 [2024-04-27 00:47:58.431510] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.767 [2024-04-27 00:47:58.431549] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:05.767 [2024-04-27 00:47:58.431555] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.767 [2024-04-27 00:47:58.431562] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.767 [2024-04-27 00:47:58.431567] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.767 [2024-04-27 00:47:58.431583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.704 00:47:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:06.704 00:47:59 -- common/autotest_common.sh@850 -- # return 0 00:14:06.704 00:47:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:06.704 00:47:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:06.704 00:47:59 -- common/autotest_common.sh@10 -- # set +x 00:14:06.704 00:47:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.704 00:47:59 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.704 00:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.704 00:47:59 -- common/autotest_common.sh@10 -- # set +x 00:14:06.704 [2024-04-27 00:47:59.138949] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.704 00:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.704 00:47:59 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:06.704 00:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.704 00:47:59 -- common/autotest_common.sh@10 -- # set +x 00:14:06.704 Malloc0 00:14:06.704 00:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.704 00:47:59 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:06.705 00:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.705 00:47:59 -- common/autotest_common.sh@10 -- # set +x 00:14:06.705 00:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.705 00:47:59 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:06.705 00:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.705 00:47:59 -- common/autotest_common.sh@10 -- # set +x 00:14:06.705 00:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.705 00:47:59 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.705 00:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.705 00:47:59 -- common/autotest_common.sh@10 -- # set +x 00:14:06.705 [2024-04-27 00:47:59.193222] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.705 00:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.705 00:47:59 -- target/queue_depth.sh@30 -- # bdevperf_pid=1654018 00:14:06.705 00:47:59 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:06.705 00:47:59 -- target/queue_depth.sh@33 -- # waitforlisten 1654018 /var/tmp/bdevperf.sock 00:14:06.705 00:47:59 -- common/autotest_common.sh@817 -- # '[' -z 1654018 ']' 00:14:06.705 00:47:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.705 00:47:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:06.705 
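Stripped of the xtrace noise, the target bring-up for the queue-depth test is five RPCs issued through rpc_cmd, which effectively runs scripts/rpc.py against the /var/tmp/spdk.sock socket waited on above. A condensed sketch of the same sequence issued directly:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport with the options used in the trace (-t tcp -o -u 8192)
$rpc nvmf_create_transport -t tcp -o -u 8192
# 64 MiB RAM-backed bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
$rpc bdev_malloc_create 64 512 -b Malloc0
# Subsystem cnode1: -a allows any host, -s sets the serial number
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# Attach Malloc0 as a namespace of cnode1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Listen on the address assigned to cvl_0_0 inside the test namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420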
00:47:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.705 00:47:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:06.705 00:47:59 -- common/autotest_common.sh@10 -- # set +x 00:14:06.705 00:47:59 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:06.705 [2024-04-27 00:47:59.239196] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:14:06.705 [2024-04-27 00:47:59.239239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654018 ] 00:14:06.705 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.705 [2024-04-27 00:47:59.292947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.705 [2024-04-27 00:47:59.363571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.642 00:48:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:07.642 00:48:00 -- common/autotest_common.sh@850 -- # return 0 00:14:07.642 00:48:00 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:07.642 00:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:07.642 00:48:00 -- common/autotest_common.sh@10 -- # set +x 00:14:07.642 NVMe0n1 00:14:07.642 00:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:07.642 00:48:00 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:07.642 Running I/O for 10 seconds... 
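On the initiator side the test starts bdevperf in passive mode (-z) on its own RPC socket, attaches the exported namespace over TCP (which surfaces bdev NVMe0n1), and only then kicks off the 10-second verify run at queue depth 1024 through bdevperf.py. A sketch of that sequence with the workspace path shortened into a variable:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bdevperf.sock

# Start bdevperf idle; -q 1024 -o 4096 -w verify -t 10 as in the trace above
$spdk/build/examples/bdevperf -z -r "$sock" -q 1024 -o 4096 -w verify -t 10 &

# Attach the target's namespace over TCP; this creates bdev NVMe0n1
$spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

# Trigger the configured workload and wait for the result table
$spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests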
00:14:17.618 00:14:17.618 Latency(us) 00:14:17.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.618 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:17.618 Verification LBA range: start 0x0 length 0x4000 00:14:17.618 NVMe0n1 : 10.06 12122.27 47.35 0.00 0.00 84203.40 16526.47 61546.85 00:14:17.618 =================================================================================================================== 00:14:17.618 Total : 12122.27 47.35 0.00 0.00 84203.40 16526.47 61546.85 00:14:17.618 0 00:14:17.877 00:48:10 -- target/queue_depth.sh@39 -- # killprocess 1654018 00:14:17.877 00:48:10 -- common/autotest_common.sh@936 -- # '[' -z 1654018 ']' 00:14:17.877 00:48:10 -- common/autotest_common.sh@940 -- # kill -0 1654018 00:14:17.877 00:48:10 -- common/autotest_common.sh@941 -- # uname 00:14:17.877 00:48:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:17.877 00:48:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1654018 00:14:17.877 00:48:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:17.877 00:48:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:17.877 00:48:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1654018' 00:14:17.877 killing process with pid 1654018 00:14:17.877 00:48:10 -- common/autotest_common.sh@955 -- # kill 1654018 00:14:17.877 Received shutdown signal, test time was about 10.000000 seconds 00:14:17.877 00:14:17.877 Latency(us) 00:14:17.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.877 =================================================================================================================== 00:14:17.877 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:17.877 00:48:10 -- common/autotest_common.sh@960 -- # wait 1654018 00:14:18.136 00:48:10 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:18.136 00:48:10 -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:18.136 00:48:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:18.136 00:48:10 -- nvmf/common.sh@117 -- # sync 00:14:18.136 00:48:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.136 00:48:10 -- nvmf/common.sh@120 -- # set +e 00:14:18.136 00:48:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.136 00:48:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.136 rmmod nvme_tcp 00:14:18.136 rmmod nvme_fabrics 00:14:18.136 rmmod nvme_keyring 00:14:18.136 00:48:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.136 00:48:10 -- nvmf/common.sh@124 -- # set -e 00:14:18.136 00:48:10 -- nvmf/common.sh@125 -- # return 0 00:14:18.136 00:48:10 -- nvmf/common.sh@478 -- # '[' -n 1653770 ']' 00:14:18.136 00:48:10 -- nvmf/common.sh@479 -- # killprocess 1653770 00:14:18.136 00:48:10 -- common/autotest_common.sh@936 -- # '[' -z 1653770 ']' 00:14:18.136 00:48:10 -- common/autotest_common.sh@940 -- # kill -0 1653770 00:14:18.136 00:48:10 -- common/autotest_common.sh@941 -- # uname 00:14:18.136 00:48:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:18.136 00:48:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1653770 00:14:18.136 00:48:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:18.136 00:48:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:18.136 00:48:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1653770' 00:14:18.136 killing process with pid 1653770 00:14:18.136 
00:48:10 -- common/autotest_common.sh@955 -- # kill 1653770 00:14:18.136 00:48:10 -- common/autotest_common.sh@960 -- # wait 1653770 00:14:18.394 00:48:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:18.395 00:48:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:18.395 00:48:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:18.395 00:48:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:18.395 00:48:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:18.395 00:48:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.395 00:48:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:18.395 00:48:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.299 00:48:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:20.299 00:14:20.299 real 0m19.990s 00:14:20.299 user 0m24.684s 00:14:20.299 sys 0m5.466s 00:14:20.299 00:48:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:20.299 00:48:12 -- common/autotest_common.sh@10 -- # set +x 00:14:20.299 ************************************ 00:14:20.299 END TEST nvmf_queue_depth 00:14:20.299 ************************************ 00:14:20.560 00:48:13 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:20.560 00:48:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:20.560 00:48:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:20.560 00:48:13 -- common/autotest_common.sh@10 -- # set +x 00:14:20.560 ************************************ 00:14:20.560 START TEST nvmf_multipath 00:14:20.560 ************************************ 00:14:20.560 00:48:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:20.560 * Looking for test storage... 
00:14:20.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.560 00:48:13 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.560 00:48:13 -- nvmf/common.sh@7 -- # uname -s 00:14:20.560 00:48:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.560 00:48:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.560 00:48:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.560 00:48:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.561 00:48:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.561 00:48:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.561 00:48:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.561 00:48:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.561 00:48:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.561 00:48:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.561 00:48:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:20.561 00:48:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:20.561 00:48:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.561 00:48:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.561 00:48:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.561 00:48:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.561 00:48:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.561 00:48:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.561 00:48:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.561 00:48:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.561 00:48:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.561 00:48:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.561 00:48:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.561 00:48:13 -- paths/export.sh@5 -- # export PATH 00:14:20.561 00:48:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.561 00:48:13 -- nvmf/common.sh@47 -- # : 0 00:14:20.561 00:48:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.561 00:48:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.561 00:48:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.561 00:48:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.561 00:48:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.561 00:48:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.561 00:48:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:20.561 00:48:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:20.821 00:48:13 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:20.821 00:48:13 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:20.821 00:48:13 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:20.821 00:48:13 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.821 00:48:13 -- target/multipath.sh@43 -- # nvmftestinit 00:14:20.821 00:48:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:20.821 00:48:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.821 00:48:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:20.821 00:48:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:20.821 00:48:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:20.821 00:48:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.821 00:48:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.821 00:48:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.821 00:48:13 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:20.821 00:48:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:20.821 00:48:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:20.821 00:48:13 -- common/autotest_common.sh@10 -- # set +x 00:14:26.094 00:48:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:26.094 00:48:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:26.094 00:48:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:26.094 00:48:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:26.094 00:48:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:26.094 00:48:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:26.094 00:48:18 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:14:26.094 00:48:18 -- nvmf/common.sh@295 -- # net_devs=() 00:14:26.094 00:48:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:26.094 00:48:18 -- nvmf/common.sh@296 -- # e810=() 00:14:26.094 00:48:18 -- nvmf/common.sh@296 -- # local -ga e810 00:14:26.094 00:48:18 -- nvmf/common.sh@297 -- # x722=() 00:14:26.094 00:48:18 -- nvmf/common.sh@297 -- # local -ga x722 00:14:26.094 00:48:18 -- nvmf/common.sh@298 -- # mlx=() 00:14:26.094 00:48:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:26.094 00:48:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.094 00:48:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.094 00:48:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.094 00:48:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.094 00:48:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.094 00:48:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.094 00:48:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.094 00:48:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.094 00:48:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.094 00:48:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.094 00:48:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.094 00:48:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:26.094 00:48:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:26.094 00:48:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:26.094 00:48:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.094 00:48:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:26.094 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:26.094 00:48:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:26.094 00:48:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:26.094 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:26.094 00:48:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:26.094 00:48:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.094 00:48:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.094 00:48:18 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:14:26.094 00:48:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.094 00:48:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:26.094 Found net devices under 0000:86:00.0: cvl_0_0 00:14:26.094 00:48:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.094 00:48:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:26.094 00:48:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.094 00:48:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:26.094 00:48:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.094 00:48:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:26.094 Found net devices under 0000:86:00.1: cvl_0_1 00:14:26.094 00:48:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.094 00:48:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:26.094 00:48:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:26.094 00:48:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:26.094 00:48:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:26.094 00:48:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.094 00:48:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.094 00:48:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.094 00:48:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:26.094 00:48:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.094 00:48:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.094 00:48:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:26.094 00:48:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.094 00:48:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.094 00:48:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:26.094 00:48:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:26.094 00:48:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.094 00:48:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.094 00:48:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.094 00:48:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.094 00:48:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:26.094 00:48:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.353 00:48:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.353 00:48:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.353 00:48:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:26.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:14:26.353 00:14:26.353 --- 10.0.0.2 ping statistics --- 00:14:26.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.353 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:14:26.353 00:48:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:26.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:14:26.353 00:14:26.353 --- 10.0.0.1 ping statistics --- 00:14:26.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.353 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:14:26.353 00:48:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.353 00:48:18 -- nvmf/common.sh@411 -- # return 0 00:14:26.353 00:48:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:26.353 00:48:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.353 00:48:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:26.353 00:48:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:26.353 00:48:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.353 00:48:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:26.353 00:48:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:26.353 00:48:18 -- target/multipath.sh@45 -- # '[' -z ']' 00:14:26.353 00:48:18 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:26.353 only one NIC for nvmf test 00:14:26.353 00:48:18 -- target/multipath.sh@47 -- # nvmftestfini 00:14:26.353 00:48:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:26.353 00:48:18 -- nvmf/common.sh@117 -- # sync 00:14:26.353 00:48:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.353 00:48:18 -- nvmf/common.sh@120 -- # set +e 00:14:26.353 00:48:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.353 00:48:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.353 rmmod nvme_tcp 00:14:26.353 rmmod nvme_fabrics 00:14:26.353 rmmod nvme_keyring 00:14:26.353 00:48:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.353 00:48:18 -- nvmf/common.sh@124 -- # set -e 00:14:26.353 00:48:18 -- nvmf/common.sh@125 -- # return 0 00:14:26.353 00:48:18 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:14:26.353 00:48:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:26.353 00:48:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:26.353 00:48:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:26.353 00:48:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.353 00:48:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.353 00:48:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.353 00:48:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.353 00:48:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.888 00:48:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:28.888 00:48:21 -- target/multipath.sh@48 -- # exit 0 00:14:28.888 00:48:21 -- target/multipath.sh@1 -- # nvmftestfini 00:14:28.888 00:48:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:28.888 00:48:21 -- nvmf/common.sh@117 -- # sync 00:14:28.888 00:48:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:28.888 00:48:21 -- nvmf/common.sh@120 -- # set +e 00:14:28.888 00:48:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:28.888 00:48:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:28.888 00:48:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:28.889 00:48:21 -- nvmf/common.sh@124 -- # set -e 00:14:28.889 00:48:21 -- nvmf/common.sh@125 -- # return 0 00:14:28.889 00:48:21 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:14:28.889 00:48:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:28.889 00:48:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:28.889 00:48:21 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:14:28.889 00:48:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.889 00:48:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:28.889 00:48:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.889 00:48:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.889 00:48:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.889 00:48:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:28.889 00:14:28.889 real 0m7.953s 00:14:28.889 user 0m1.560s 00:14:28.889 sys 0m4.377s 00:14:28.889 00:48:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:28.889 00:48:21 -- common/autotest_common.sh@10 -- # set +x 00:14:28.889 ************************************ 00:14:28.889 END TEST nvmf_multipath 00:14:28.889 ************************************ 00:14:28.889 00:48:21 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:28.889 00:48:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:28.889 00:48:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:28.889 00:48:21 -- common/autotest_common.sh@10 -- # set +x 00:14:28.889 ************************************ 00:14:28.889 START TEST nvmf_zcopy 00:14:28.889 ************************************ 00:14:28.889 00:48:21 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:28.889 * Looking for test storage... 00:14:28.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.889 00:48:21 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.889 00:48:21 -- nvmf/common.sh@7 -- # uname -s 00:14:28.889 00:48:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.889 00:48:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.889 00:48:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.889 00:48:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.889 00:48:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.889 00:48:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.889 00:48:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.889 00:48:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.889 00:48:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.889 00:48:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.889 00:48:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:28.889 00:48:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:28.889 00:48:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.889 00:48:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.889 00:48:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.889 00:48:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.889 00:48:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.889 00:48:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.889 00:48:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.889 00:48:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.889 
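The nvmf_multipath test above finishes in a few seconds by design: lines 45-48 of multipath.sh test an empty string, print 'only one NIC for nvmf test', tear the setup down and exit 0, because this rig never populates a second target IP. A sketch of that guard; the variable name is inferred from the NVMF_SECOND_TARGET_IP= assignment in nvmf/common.sh earlier in the trace, it is not shown literally:

# Early-exit guard as suggested by the trace (variable name assumed)
if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
    echo "only one NIC for nvmf test"
    nvmftestfini
    exit 0
fi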
00:48:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.889 00:48:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.889 00:48:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.889 00:48:21 -- paths/export.sh@5 -- # export PATH 00:14:28.889 00:48:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.889 00:48:21 -- nvmf/common.sh@47 -- # : 0 00:14:28.889 00:48:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:28.889 00:48:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:28.889 00:48:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.889 00:48:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.889 00:48:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.889 00:48:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:28.889 00:48:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:28.889 00:48:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:28.889 00:48:21 -- target/zcopy.sh@12 -- # nvmftestinit 00:14:28.889 00:48:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:28.889 00:48:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.889 00:48:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:28.889 00:48:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:28.889 00:48:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:28.889 00:48:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.889 00:48:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
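The very long PATH values above come from /etc/opt/spdk-pkgdep/paths/export.sh: each of its first three lines prepends one toolchain directory to the current PATH, and every test script sources the file again, so the same three entries keep accumulating as duplicates. A rough reconstruction from the @2..@6 trace lines, not a verbatim copy of the file:

# /etc/opt/spdk-pkgdep/paths/export.sh (reconstructed sketch)
PATH=/opt/golangci/1.54.2/bin:$PATH   # @2
PATH=/opt/go/1.21.1/bin:$PATH         # @3
PATH=/opt/protoc/21.7/bin:$PATH       # @4
export PATH                           # @5
echo "$PATH"                          # @6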
00:14:28.889 00:48:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.889 00:48:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:28.889 00:48:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:28.889 00:48:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:28.889 00:48:21 -- common/autotest_common.sh@10 -- # set +x 00:14:34.268 00:48:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:34.268 00:48:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:34.268 00:48:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:34.268 00:48:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:34.268 00:48:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:34.268 00:48:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:34.268 00:48:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:34.268 00:48:26 -- nvmf/common.sh@295 -- # net_devs=() 00:14:34.268 00:48:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:34.268 00:48:26 -- nvmf/common.sh@296 -- # e810=() 00:14:34.268 00:48:26 -- nvmf/common.sh@296 -- # local -ga e810 00:14:34.268 00:48:26 -- nvmf/common.sh@297 -- # x722=() 00:14:34.268 00:48:26 -- nvmf/common.sh@297 -- # local -ga x722 00:14:34.268 00:48:26 -- nvmf/common.sh@298 -- # mlx=() 00:14:34.268 00:48:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:34.268 00:48:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:34.268 00:48:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:34.268 00:48:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:34.268 00:48:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:34.268 00:48:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:34.268 00:48:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:34.268 00:48:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:34.268 00:48:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:34.268 00:48:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:34.268 00:48:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:34.268 00:48:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:34.268 00:48:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:34.268 00:48:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:34.268 00:48:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:34.268 00:48:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:34.268 00:48:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:34.268 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:34.268 00:48:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:34.268 00:48:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:34.268 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:14:34.268 00:48:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:34.268 00:48:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:34.268 00:48:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:34.268 00:48:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.268 00:48:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:34.268 00:48:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.268 00:48:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:34.268 Found net devices under 0000:86:00.0: cvl_0_0 00:14:34.268 00:48:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.268 00:48:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:34.268 00:48:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:34.268 00:48:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:34.268 00:48:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:34.268 00:48:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:34.268 Found net devices under 0000:86:00.1: cvl_0_1 00:14:34.268 00:48:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:34.268 00:48:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:34.268 00:48:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:34.268 00:48:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:34.269 00:48:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:34.269 00:48:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:34.269 00:48:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.269 00:48:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:34.269 00:48:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:34.269 00:48:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:34.269 00:48:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:34.269 00:48:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:34.269 00:48:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:34.269 00:48:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:34.269 00:48:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.269 00:48:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:34.269 00:48:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:34.269 00:48:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:34.269 00:48:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:34.269 00:48:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:34.269 00:48:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:34.269 00:48:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:34.269 00:48:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:34.269 00:48:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:34.269 
00:48:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:34.269 00:48:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:34.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:34.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:14:34.269 00:14:34.269 --- 10.0.0.2 ping statistics --- 00:14:34.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.269 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:14:34.269 00:48:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:34.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:34.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.435 ms 00:14:34.269 00:14:34.269 --- 10.0.0.1 ping statistics --- 00:14:34.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.269 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:14:34.269 00:48:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.269 00:48:26 -- nvmf/common.sh@411 -- # return 0 00:14:34.269 00:48:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:34.269 00:48:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.269 00:48:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:34.269 00:48:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:34.269 00:48:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.269 00:48:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:34.269 00:48:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:34.269 00:48:26 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:34.269 00:48:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:34.269 00:48:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:34.269 00:48:26 -- common/autotest_common.sh@10 -- # set +x 00:14:34.269 00:48:26 -- nvmf/common.sh@470 -- # nvmfpid=1663197 00:14:34.269 00:48:26 -- nvmf/common.sh@471 -- # waitforlisten 1663197 00:14:34.269 00:48:26 -- common/autotest_common.sh@817 -- # '[' -z 1663197 ']' 00:14:34.269 00:48:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.269 00:48:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:34.269 00:48:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.269 00:48:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:34.269 00:48:26 -- common/autotest_common.sh@10 -- # set +x 00:14:34.269 00:48:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:34.269 [2024-04-27 00:48:26.375784] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:14:34.269 [2024-04-27 00:48:26.375828] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.269 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.269 [2024-04-27 00:48:26.431479] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.269 [2024-04-27 00:48:26.508263] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:34.269 [2024-04-27 00:48:26.508299] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.269 [2024-04-27 00:48:26.508307] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.269 [2024-04-27 00:48:26.508313] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.269 [2024-04-27 00:48:26.508318] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.269 [2024-04-27 00:48:26.508338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.528 00:48:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:34.528 00:48:27 -- common/autotest_common.sh@850 -- # return 0 00:14:34.528 00:48:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:34.528 00:48:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:34.528 00:48:27 -- common/autotest_common.sh@10 -- # set +x 00:14:34.528 00:48:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.528 00:48:27 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:34.528 00:48:27 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:34.528 00:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.528 00:48:27 -- common/autotest_common.sh@10 -- # set +x 00:14:34.528 [2024-04-27 00:48:27.207199] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.528 00:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.528 00:48:27 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:34.528 00:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.528 00:48:27 -- common/autotest_common.sh@10 -- # set +x 00:14:34.528 00:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.528 00:48:27 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:34.528 00:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.528 00:48:27 -- common/autotest_common.sh@10 -- # set +x 00:14:34.528 [2024-04-27 00:48:27.223324] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.788 00:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.788 00:48:27 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:34.788 00:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.788 00:48:27 -- common/autotest_common.sh@10 -- # set +x 00:14:34.788 00:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.788 00:48:27 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:34.788 00:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.788 00:48:27 -- common/autotest_common.sh@10 -- # set +x 00:14:34.788 malloc0 00:14:34.788 00:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.788 00:48:27 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:34.788 00:48:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:34.788 00:48:27 -- common/autotest_common.sh@10 -- # set +x 00:14:34.788 00:48:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:34.788 00:48:27 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:34.788 00:48:27 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:34.788 00:48:27 -- nvmf/common.sh@521 -- # config=() 00:14:34.788 00:48:27 -- nvmf/common.sh@521 -- # local subsystem config 00:14:34.788 00:48:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:14:34.788 00:48:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:14:34.788 { 00:14:34.788 "params": { 00:14:34.788 "name": "Nvme$subsystem", 00:14:34.788 "trtype": "$TEST_TRANSPORT", 00:14:34.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:34.788 "adrfam": "ipv4", 00:14:34.788 "trsvcid": "$NVMF_PORT", 00:14:34.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:34.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:34.788 "hdgst": ${hdgst:-false}, 00:14:34.788 "ddgst": ${ddgst:-false} 00:14:34.788 }, 00:14:34.788 "method": "bdev_nvme_attach_controller" 00:14:34.788 } 00:14:34.788 EOF 00:14:34.788 )") 00:14:34.788 00:48:27 -- nvmf/common.sh@543 -- # cat 00:14:34.788 00:48:27 -- nvmf/common.sh@545 -- # jq . 00:14:34.788 00:48:27 -- nvmf/common.sh@546 -- # IFS=, 00:14:34.788 00:48:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:34.788 "params": { 00:14:34.788 "name": "Nvme1", 00:14:34.788 "trtype": "tcp", 00:14:34.788 "traddr": "10.0.0.2", 00:14:34.788 "adrfam": "ipv4", 00:14:34.788 "trsvcid": "4420", 00:14:34.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:34.788 "hdgst": false, 00:14:34.788 "ddgst": false 00:14:34.788 }, 00:14:34.788 "method": "bdev_nvme_attach_controller" 00:14:34.788 }' 00:14:34.788 [2024-04-27 00:48:27.300551] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:14:34.788 [2024-04-27 00:48:27.300592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1663443 ] 00:14:34.788 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.788 [2024-04-27 00:48:27.353400] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.788 [2024-04-27 00:48:27.423819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.049 Running I/O for 10 seconds... 
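Unlike the queue-depth test, the zcopy verify run does not attach its controller over RPC: gen_nvmf_target_json writes a bdev configuration to a file descriptor and bdevperf reads it via --json /dev/fd/62, so bdev Nvme1n1 exists as soon as the application starts. A sketch of the equivalent invocation using a here-doc; the params block is the one printed above, while the surrounding subsystems/bdev wrapper is assumed to be what gen_nvmf_target_json adds around it:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$spdk/build/examples/bdevperf --json /dev/stdin -t 10 -q 128 -w verify -o 8192 <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF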
00:14:47.256
00:14:47.256                                                                                    Latency(us)
00:14:47.256 Device Information                                                          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:47.256 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:14:47.256 Verification LBA range: start 0x0 length 0x1000
00:14:47.256 Nvme1n1                                                                     :      10.01    8513.11      66.51       0.00     0.00   14993.24    2008.82   35788.35
00:14:47.256 ===================================================================================================================
00:14:47.256 Total                                                                       :              8513.11      66.51       0.00     0.00   14993.24    2008.82   35788.35
00:14:47.256 00:48:37 -- target/zcopy.sh@39 -- # perfpid=1665207
00:14:47.256 00:48:37 -- target/zcopy.sh@41 -- # xtrace_disable
00:14:47.256 00:48:37 -- common/autotest_common.sh@10 -- # set +x
00:14:47.256 00:48:37 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:14:47.256 00:48:37 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:14:47.256 00:48:37 -- nvmf/common.sh@521 -- # config=()
00:14:47.256 00:48:37 -- nvmf/common.sh@521 -- # local subsystem config
00:14:47.256 00:48:37 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:14:47.256 00:48:37 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:14:47.256 {
00:14:47.256 "params": {
00:14:47.256 "name": "Nvme$subsystem",
00:14:47.256 "trtype": "$TEST_TRANSPORT",
00:14:47.256 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:47.256 "adrfam": "ipv4",
00:14:47.256 "trsvcid": "$NVMF_PORT",
00:14:47.256 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:47.256 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:47.256 "hdgst": ${hdgst:-false},
00:14:47.256 "ddgst": ${ddgst:-false}
00:14:47.256 },
00:14:47.256 "method": "bdev_nvme_attach_controller"
00:14:47.256 }
00:14:47.256 EOF
00:14:47.256 )")
00:14:47.256 00:48:37 -- nvmf/common.sh@543 -- # cat
00:14:47.256 [2024-04-27 00:48:37.991218] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:47.256 [2024-04-27 00:48:37.991252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:47.256 00:48:37 -- nvmf/common.sh@545 -- # jq .
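As a quick sanity check, the verify-run results table above is internally consistent: 8513.11 IOPS at the 8192-byte I/O size works out to the reported 66.51 MiB/s, and with a queue depth of 128 Little's law predicts an average latency of about 128/8513.11 s, roughly 15 ms, in line with the 14993.24 us average. The bc one-liners below are illustrative only, not part of the test:
echo "8513.11 * 8192 / 1048576" | bc -l    # ~66.51, the MiB/s column
echo "128 / 8513.11 * 1000000" | bc -l     # ~15036 us expected vs. 14993.24 us measured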
00:14:47.256 00:48:37 -- nvmf/common.sh@546 -- # IFS=, 00:14:47.256 00:48:37 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:14:47.256 "params": { 00:14:47.256 "name": "Nvme1", 00:14:47.256 "trtype": "tcp", 00:14:47.256 "traddr": "10.0.0.2", 00:14:47.256 "adrfam": "ipv4", 00:14:47.256 "trsvcid": "4420", 00:14:47.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:47.256 "hdgst": false, 00:14:47.256 "ddgst": false 00:14:47.256 }, 00:14:47.256 "method": "bdev_nvme_attach_controller" 00:14:47.256 }' 00:14:47.256 [2024-04-27 00:48:37.999205] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:37.999219] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.007224] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.007234] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.015245] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.015256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.015617] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:14:47.256 [2024-04-27 00:48:38.015662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1665207 ] 00:14:47.256 [2024-04-27 00:48:38.023267] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.023278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.031286] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.031296] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.256 [2024-04-27 00:48:38.039309] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.039319] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.047329] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.047340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.055351] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.055360] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.063372] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.063382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.069568] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.256 [2024-04-27 00:48:38.071394] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.071405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
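From here to the end of the section the log is dominated by pairs of spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused errors. While the second bdevperf job (the randrw run launched above) initializes and runs, the test keeps issuing nvmf_subsystem_add_ns for NSID 1, which malloc0 already occupies; each RPC pauses and resumes the subsystem before failing, so the repeated failures are expected and, presumably, the point: they hammer the pause/resume path while zero-copy I/O is in flight. A loop with the same effect might look like the sketch below (hypothetical; the actual driver is target/zcopy.sh):
# Re-add an already-used NSID for as long as the bdevperf job (PID in $perfpid) is alive.
# Every call is expected to fail with "Requested NSID 1 already in use".
while kill -0 "$perfpid" 2>/dev/null; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done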
00:14:47.256 [2024-04-27 00:48:38.079418] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.079431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.087437] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.087448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.095464] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.095473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.103485] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.103495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.111510] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.111536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.119530] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.119542] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.127550] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.127560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.135570] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.135581] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.256 [2024-04-27 00:48:38.142183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.256 [2024-04-27 00:48:38.143593] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.256 [2024-04-27 00:48:38.143604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.151619] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.151634] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.159643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.159662] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.167662] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.167674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.175683] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.175695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.183704] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.183717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.191723] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.191734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.199747] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.199760] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.207769] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.207780] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.215790] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.215801] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.223811] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.223821] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.231853] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.231872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.239865] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.239881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.247883] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.247895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.255903] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.255917] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.263923] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.263933] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.271941] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.271951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.279962] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.279971] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.287986] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.287996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.296009] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.296020] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.304038] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.304051] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.312056] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.312068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.320080] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.320091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.328102] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.328112] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.336123] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.336132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.344141] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.344151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.352164] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.352175] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.360188] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.360202] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.368207] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.368217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.376242] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.376259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 Running I/O for 5 seconds... 
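For reference, the job whose start is logged just above ("Running I/O for 5 seconds...") is the second bdevperf invocation from the trace. Spelled out as a standalone command it would look like the line below (config file name illustrative); -M is bdevperf's read percentage for mixed workloads, so -M 50 requests a 50/50 random read/write split at queue depth 128 with 8192-byte I/Os:
./build/examples/bdevperf --json bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192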
00:14:47.257 [2024-04-27 00:48:38.384254] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.384264] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.405997] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.406018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.419516] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.419535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.428530] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.428553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.436098] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.436117] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.443808] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.443826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.453368] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.453387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.461294] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.461312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.472247] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.472267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.479247] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.479265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.488182] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.488201] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.495478] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.495497] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.504571] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.504589] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.512470] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.512488] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.520281] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 
[2024-04-27 00:48:38.520299] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.527622] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.527640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.538832] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.538851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.545790] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.545809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.556796] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.556815] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.565437] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.565456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.574092] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.574110] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.584979] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.584997] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.594961] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.594983] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.605326] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.605345] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.257 [2024-04-27 00:48:38.616034] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.257 [2024-04-27 00:48:38.616052] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.625197] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.625216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.632048] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.632067] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.642150] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.642169] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.651352] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.651371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.658100] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.658119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.669091] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.669111] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.676823] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.676842] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.687405] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.687424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.694420] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.694438] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.705917] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.705936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.714856] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.714875] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.727157] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.727177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.736715] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.736734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.747009] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.747027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.759588] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.759606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.770990] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.771010] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.779674] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.779697] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.789036] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.789056] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.797643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.797662] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.806129] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.806148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.814811] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.814830] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.823839] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.823857] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.832413] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.832432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.841314] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.841333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.850900] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.850919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.862056] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.862078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.870217] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.870236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.882203] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.882221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.894261] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.894279] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.901719] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.901738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.910998] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.911017] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.919880] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.919898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.926580] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.926598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.942022] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.942042] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.949976] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.949995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.957904] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.957927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.968629] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.968648] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.976663] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.976682] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.986411] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.986430] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:38.994570] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:38.994589] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.003807] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.003825] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.011423] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.011442] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.019941] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.019962] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.028298] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.028317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.035339] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.035358] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.046084] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.046103] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.053539] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.053557] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.061188] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.061207] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.070636] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.070653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.079204] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.079223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.088077] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.088096] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.096888] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.096907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.106121] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.106140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.115102] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.115121] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.258 [2024-04-27 00:48:39.123961] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.258 [2024-04-27 00:48:39.123983] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.133451] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.133471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.142006] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.142025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.148957] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.148975] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.159355] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.159374] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.168377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.168397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.177152] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.177171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.185718] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.185737] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.194830] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.194849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.203773] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.203792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.213661] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.213680] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.222675] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.222694] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.231527] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.231546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.240549] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.240568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.250343] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.250363] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.259490] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.259509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.268763] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.268782] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.277654] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.277673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.286680] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.286699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.295339] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.295358] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.304342] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.304361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.312750] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.312769] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.321926] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.321945] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.331193] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.331211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.339373] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.339401] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.348433] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.348451] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.357350] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.357368] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.366461] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.366479] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.375378] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.375396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.384181] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.384200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.393064] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.393088] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.401675] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.401694] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.410093] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.410111] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.418528] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.418547] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.427795] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.427814] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.436272] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.436292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.445368] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.445386] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.454110] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.454130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.462916] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.462935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.471896] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.471915] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.480945] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.480963] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.489377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.489395] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.498574] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.498592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.506974] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.506992] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.515744] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.515763] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.524313] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.524332] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.533152] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.533171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.541999] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.542018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.551145] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.551164] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.560276] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.560295] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.569681] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.569700] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.578233] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.578251] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.587341] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.587360] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.596506] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.596525] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.605744] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.259 [2024-04-27 00:48:39.605762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.259 [2024-04-27 00:48:39.614465] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.614484] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.623358] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.623376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.632140] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.632159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.641257] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.641276] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.650278] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.650296] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.659141] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.659159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.667594] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.667613] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.675241] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.675259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.684275] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.684293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.692978] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.692996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.702057] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.702081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.711240] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.711258] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.720384] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.720402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.731282] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.731300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.741649] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.741666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.749357] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.749375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.760898] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.760916] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.771844] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.771863] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.780637] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.780655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.789489] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.789507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.798382] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.798401] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.806233] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.806252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.814126] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.814143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.821551] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.821568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.831883] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.831902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.840463] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.840481] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.849199] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.849218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.859954] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.859971] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.870774] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.870793] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.879668] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.879687] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.888239] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.888257] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.897143] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.897161] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.905569] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.905587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.914090] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.914108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.922996] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.923013] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.933906] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.933925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.260 [2024-04-27 00:48:39.944178] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.260 [2024-04-27 00:48:39.944197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:39.951573] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:39.951592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:39.962842] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:39.962862] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:39.970431] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:39.970454] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:39.977853] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:39.977871] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:39.985171] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:39.985199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:39.995717] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:39.995736] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:40.002610] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:40.002630] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:40.014022] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:40.014041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:40.021180] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:40.021225] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:40.032124] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:40.032143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:40.041128] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:40.041147] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:40.049917] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:40.049935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:40.059139] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:40.059159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:40.066841] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:40.066861] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:40.077957] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:40.077976] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:40.086927] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:40.086946] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:40.095816] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:40.095835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.528 [2024-04-27 00:48:40.103749] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.528 [2024-04-27 00:48:40.103769] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.529 [2024-04-27 00:48:40.113716] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.529 [2024-04-27 00:48:40.113735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.529 [2024-04-27 00:48:40.120670] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.529 [2024-04-27 00:48:40.120689] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.529 [2024-04-27 00:48:40.138377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.529 [2024-04-27 00:48:40.138396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.529 [2024-04-27 00:48:40.147476] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.529 [2024-04-27 00:48:40.147502] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.529 [2024-04-27 00:48:40.154616] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.529 [2024-04-27 00:48:40.154635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.529 [2024-04-27 00:48:40.165015] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.529 [2024-04-27 00:48:40.165034] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.529 [2024-04-27 00:48:40.172955] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.529 [2024-04-27 00:48:40.172974] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.529 [2024-04-27 00:48:40.181087] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.529 [2024-04-27 00:48:40.181106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.529 [2024-04-27 00:48:40.189360] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.529 [2024-04-27 00:48:40.189380] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.529 [2024-04-27 00:48:40.196894] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.529 [2024-04-27 00:48:40.196914] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.529 [2024-04-27 00:48:40.204254] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.529 [2024-04-27 00:48:40.204273] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.529 [2024-04-27 00:48:40.215105] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.529 [2024-04-27 00:48:40.215125] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.224463] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.224483] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.236147] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.236166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.246012] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.246031] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.258170] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.258188] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.266395] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.266413] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.276794] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.276812] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.285555] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.285573] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.295312] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.295331] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.306074] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.306093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.316378] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.316397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.325629] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.325652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.333323] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.333342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.341375] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.341393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.351534] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.351553] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.361467] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.361485] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.370142] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.370160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.378730] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.378749] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.387792] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.387811] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.396705] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.396724] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.407536] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.407554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.416969] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.416988] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.426450] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.426469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.433226] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.433245] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.444330] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.444348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.453028] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.453046] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.461926] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.461945] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.468761] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.468779] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.787 [2024-04-27 00:48:40.480286] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.787 [2024-04-27 00:48:40.480307] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.488940] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.488960] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.498316] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.498341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.506850] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.506870] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.515712] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.515732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.524516] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.524536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.532995] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.533015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.542055] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.542080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.550741] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.550761] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.559794] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.559813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.568427] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.568446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.577854] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.577872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.587330] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.587359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.596582] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.596600] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.605687] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.605706] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.614307] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.614326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.622909] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.622927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.631776] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.631794] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.640094] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.640113] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.649123] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.649143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.655825] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.655844] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.666690] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.666709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.675124] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.675143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.684435] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.684455] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.693411] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.693430] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.701385] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.701404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.711530] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.711550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.720776] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.720795] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.729257] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.729276] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.046 [2024-04-27 00:48:40.738024] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.046 [2024-04-27 00:48:40.738050] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.747098] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.305 [2024-04-27 00:48:40.747118] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.756387] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.305 [2024-04-27 00:48:40.756406] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.764739] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.305 [2024-04-27 00:48:40.764758] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.773467] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.305 [2024-04-27 00:48:40.773486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.782304] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.305 [2024-04-27 00:48:40.782324] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.791093] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.305 [2024-04-27 00:48:40.791112] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.799861] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.305 [2024-04-27 00:48:40.799880] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.808672] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.305 [2024-04-27 00:48:40.808689] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.817952] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.305 [2024-04-27 00:48:40.817970] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.827143] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.305 [2024-04-27 00:48:40.827161] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.835497] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.305 [2024-04-27 00:48:40.835516] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.844447] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.305 [2024-04-27 00:48:40.844466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.851665] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.305 [2024-04-27 00:48:40.851683] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.305 [2024-04-27 00:48:40.862649] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.862668] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.871053] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.871077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.879456] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.879475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.887870] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.887889] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.896394] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.896413] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.904771] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.904789] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.913728] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.913747] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.922779] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.922797] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.931507] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.931526] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.940530] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.940548] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.949233] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.949251] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.957970] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.957989] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.966723] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.966742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.975233] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.975252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.984611] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.984630] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.306 [2024-04-27 00:48:40.993719] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.306 [2024-04-27 00:48:40.993738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.002483] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.002510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.011607] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.011627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.021403] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.021424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.030115] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.030134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.039145] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.039163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.048468] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.048487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.057476] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.057494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.066564] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.066583] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.075737] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.075755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.084264] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.084282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.092960] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.092979] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.102017] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.102036] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.110623] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.110641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.119429] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.119447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.127474] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.127492] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.138138] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.138156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.146623] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.146641] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.155398] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.155418] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.164330] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.164349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.173301] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.173319] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.181746] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.181764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.190288] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.190306] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.565 [2024-04-27 00:48:41.198716] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.565 [2024-04-27 00:48:41.198734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.566 [2024-04-27 00:48:41.207473] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.566 [2024-04-27 00:48:41.207492] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.566 [2024-04-27 00:48:41.215585] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.566 [2024-04-27 00:48:41.215603] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.566 [2024-04-27 00:48:41.224779] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.566 [2024-04-27 00:48:41.224797] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.566 [2024-04-27 00:48:41.233448] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.566 [2024-04-27 00:48:41.233466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.566 [2024-04-27 00:48:41.242123] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.566 [2024-04-27 00:48:41.242140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.566 [2024-04-27 00:48:41.250790] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.566 [2024-04-27 00:48:41.250807] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.566 [2024-04-27 00:48:41.259509] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.566 [2024-04-27 00:48:41.259527] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.268970] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.268990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.278224] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.278242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.286764] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.286782] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.296270] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.296288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.305518] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.305536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.314163] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.314181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.321151] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.321168] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.331603] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.331625] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.340272] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.340290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.349403] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.349421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.358500] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.358517] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.368182] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.368200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.377087] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.377105] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.386257] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.386275] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.394955] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.394973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.403507] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.403525] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.411835] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.411852] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.418750] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.418768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.429976] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.429995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.441035] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.441052] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.451111] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.451129] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.459966] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.459984] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.472799] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.472817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.482264] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.482282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.489326] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.489344] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.498597] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.498616] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.507134] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.825 [2024-04-27 00:48:41.507156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.825 [2024-04-27 00:48:41.516494] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.826 [2024-04-27 00:48:41.516514] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.085 [2024-04-27 00:48:41.523635] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.085 [2024-04-27 00:48:41.523655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.085 [2024-04-27 00:48:41.534264] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.085 [2024-04-27 00:48:41.534283] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.085 [2024-04-27 00:48:41.541076] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.085 [2024-04-27 00:48:41.541094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.085 [2024-04-27 00:48:41.552353] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.085 [2024-04-27 00:48:41.552372] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.085 [2024-04-27 00:48:41.559370] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.085 [2024-04-27 00:48:41.559389] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.085 [2024-04-27 00:48:41.569643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.085 [2024-04-27 00:48:41.569661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.085 [2024-04-27 00:48:41.578234] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.085 [2024-04-27 00:48:41.578252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.085 [2024-04-27 00:48:41.586737] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.085 [2024-04-27 00:48:41.586756] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.085 [2024-04-27 00:48:41.595404] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.595423] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.603486] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.603504] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.610805] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.610823] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.620744] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.620764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.629234] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.629253] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.637807] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.637825] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.646338] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.646357] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.655809] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.655828] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.664773] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.664792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.675832] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.675855] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.685234] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.685253] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.694097] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.694115] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.701714] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.701733] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.712044] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.712062] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.721538] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.721556] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.731886] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.731904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.738912] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.738930] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.749615] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.749633] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.756583] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.756601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.767229] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.767248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.086 [2024-04-27 00:48:41.773936] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.086 [2024-04-27 00:48:41.773955] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.784709] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.784729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.795479] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.795498] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.804417] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.804436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.814161] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.814180] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.820966] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.820984] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.832162] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.832181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.839172] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.839190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.849204] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.849227] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.856243] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.856263] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.866716] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.866734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.875372] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.875391] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.884312] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.884331] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.893240] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.893259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.901866] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.901886] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.910873] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.910892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.920285] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.920303] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.928763] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.928781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.937824] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.937842] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.946870] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.946888] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.955343] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.955362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.962089] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.962107] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.973196] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.973215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.982293] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.982312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.991081] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.991100] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.345 [2024-04-27 00:48:41.999530] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.345 [2024-04-27 00:48:41.999549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.346 [2024-04-27 00:48:42.008612] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.346 [2024-04-27 00:48:42.008631] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.346 [2024-04-27 00:48:42.017767] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.346 [2024-04-27 00:48:42.017786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.346 [2024-04-27 00:48:42.026702] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.346 [2024-04-27 00:48:42.026721] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.346 [2024-04-27 00:48:42.035441] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.346 [2024-04-27 00:48:42.035461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.044158] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.044186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.061600] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.061620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.070500] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.070519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.079616] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.079635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.088745] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.088764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.097730] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.097749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.106759] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.106778] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.115575] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.115594] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.124960] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.124979] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.133733] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.133752] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.142234] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.142252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.151368] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.151387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.160268] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.160287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.168781] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.168800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.177848] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.177867] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.186282] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.186301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.193357] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.193376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.203570] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.203588] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.212538] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.212560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.221386] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.221405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.230394] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.230413] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.239279] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.239298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.246262] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.246281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.256254] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.256273] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.265289] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.265307] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.273888] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.273907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.284877] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.284895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.605 [2024-04-27 00:48:42.294502] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.605 [2024-04-27 00:48:42.294519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.864 [2024-04-27 00:48:42.304234] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.864 [2024-04-27 00:48:42.304253] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.864 [2024-04-27 00:48:42.311246] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.864 [2024-04-27 00:48:42.311264] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.864 [2024-04-27 00:48:42.321463] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.864 [2024-04-27 00:48:42.321481] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.864 [2024-04-27 00:48:42.330003] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.864 [2024-04-27 00:48:42.330021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.864 [2024-04-27 00:48:42.338809] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.864 [2024-04-27 00:48:42.338827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.864 [2024-04-27 00:48:42.347522] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.347541] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.356313] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.356332] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.367823] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.367841] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.378940] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.378959] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.387813] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.387831] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.397295] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.397314] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.404067] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.404093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.415629] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.415651] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.423871] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.423890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.432927] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.432945] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.442399] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.442418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.451156] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.451174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.459879] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.459897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.468760] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.468777] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.477941] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.477961] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.486590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.486608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.496048] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.496066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.505263] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.505281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.514046] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.514064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.522622] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.522640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.531644] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.531663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.540368] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.540387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.548958] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.548975] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.865 [2024-04-27 00:48:42.557801] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.865 [2024-04-27 00:48:42.557820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.566976] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.566996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.575591] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.575609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.583968] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.583987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.592828] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.592847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.601992] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.602012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.610477] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.610496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.619100] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.619119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.628296] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.628315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.637253] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.637271] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.645715] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.645733] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.654933] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.654952] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.663782] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.663800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.673309] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.673327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.681601] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.681620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.690679] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.690698] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.699244] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.699270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.708079] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.708098] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.716962] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.716980] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.124 [2024-04-27 00:48:42.724945] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.124 [2024-04-27 00:48:42.724963] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.125 [2024-04-27 00:48:42.734626] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.125 [2024-04-27 00:48:42.734645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.125 [2024-04-27 00:48:42.743268] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.125 [2024-04-27 00:48:42.743287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.125 [2024-04-27 00:48:42.751945] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.125 [2024-04-27 00:48:42.751963] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.125 [2024-04-27 00:48:42.760727] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.125 [2024-04-27 00:48:42.760745] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.125 [2024-04-27 00:48:42.769907] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.125 [2024-04-27 00:48:42.769925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.125 [2024-04-27 00:48:42.778661] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.125 [2024-04-27 00:48:42.778680] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.125 [2024-04-27 00:48:42.787497] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.125 [2024-04-27 00:48:42.787515] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.125 [2024-04-27 00:48:42.796782] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.125 [2024-04-27 00:48:42.796800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.125 [2024-04-27 00:48:42.805470] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.125 [2024-04-27 00:48:42.805488] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.125 [2024-04-27 00:48:42.814400] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.125 [2024-04-27 00:48:42.814418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.823426] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.823446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.832404] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.832422] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.841669] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.841688] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.850378] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.850397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.859176] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.859194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.868091] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.868113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.876913] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.876933] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.885576] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.885595] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.893407] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.893427] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.903435] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.903454] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.912811] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.912829] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.921829] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.921848] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.931060] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.931085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.940143] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.940162] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.949016] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.949034] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.957547] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.957565] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.965604] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.965622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.975048] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.975067] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.982585] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.982604] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:42.990503] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:42.990522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:43.000043] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:43.000062] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:43.009278] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:43.009297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:43.018563] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:43.018582] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:43.028390] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:43.028410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:43.037124] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:43.037146] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:43.043974] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:43.043991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:43.054563] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:43.054583] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:43.063139] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:43.063157] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.384 [2024-04-27 00:48:43.072470] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.384 [2024-04-27 00:48:43.072489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.081550] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.081570] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.090388] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.090407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.099655] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.099676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.109149] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.109168] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.117615] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.117633] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.126475] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.126493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.135341] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.135359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.144275] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.144293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.154447] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.154465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.165426] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.165444] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.175578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.175597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.183513] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.183531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.192767] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.192786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.201515] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.201533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.210221] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.210242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.218684] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.218703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.226907] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.226925] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.237157] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.237176] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.245582] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.245601] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.254243] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.254262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.262906] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.262924] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.270455] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.270474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.280487] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.280506] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.289495] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.289514] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.298804] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.298823] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.307590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.307610] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.315903] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.315922] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.324839] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.324858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.643 [2024-04-27 00:48:43.333726] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.643 [2024-04-27 00:48:43.333750] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.342483] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.342501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.351598] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.351617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.360377] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.360396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.369332] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.369351] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.377805] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.377824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.386199] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.386217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.394559] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.394577] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.401091] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.401108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 00:14:50.903 Latency(us) 00:14:50.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.903 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:50.903 Nvme1n1 : 5.00 15852.06 123.84 0.00 0.00 8068.45 2421.98 30317.52 00:14:50.903 =================================================================================================================== 00:14:50.903 Total : 15852.06 123.84 0.00 0.00 8068.45 2421.98 30317.52 00:14:50.903 [2024-04-27 00:48:43.409074] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.409089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.417094] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.417108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.425115] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.425126] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.433141] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.433162] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.441162] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.441177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.449182] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.449198] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.457199] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.457211] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.465218] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.465231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.473243] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.473257] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.481262] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.481275] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.489284] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.489296] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.497305] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.497316] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.505326] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.505339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.513347] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.513357] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.521372] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.521384] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.529392] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.529402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.537412] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.537422] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.545436] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.545447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.553455] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.553466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.561476] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.561485] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.569497] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.569507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.577518] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.577527] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.585539] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.585549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.903 [2024-04-27 00:48:43.593569] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.903 [2024-04-27 00:48:43.593587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.162 [2024-04-27 00:48:43.601592] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.162 [2024-04-27 00:48:43.601608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.163 [2024-04-27 00:48:43.609609] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:51.163 [2024-04-27 00:48:43.609624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:51.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1665207) - No such process 00:14:51.163 00:48:43 -- target/zcopy.sh@49 -- # wait 1665207 00:14:51.163 00:48:43 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.163 00:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.163 00:48:43 -- common/autotest_common.sh@10 -- # set +x 00:14:51.163 00:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.163 00:48:43 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:51.163 00:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.163 00:48:43 -- common/autotest_common.sh@10 -- # set +x 00:14:51.163 delay0 00:14:51.163 00:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.163 00:48:43 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:51.163 00:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:51.163 00:48:43 -- common/autotest_common.sh@10 -- # set +x 00:14:51.163 00:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:51.163 00:48:43 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:51.163 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.163 [2024-04-27 00:48:43.729566] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:57.725 Initializing NVMe Controllers 00:14:57.725 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:57.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:57.725 Initialization complete. Launching workers. 
00:14:57.725 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 98 00:14:57.725 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 373, failed to submit 45 00:14:57.726 success 188, unsuccess 185, failed 0 00:14:57.726 00:48:49 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:57.726 00:48:49 -- target/zcopy.sh@60 -- # nvmftestfini 00:14:57.726 00:48:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:57.726 00:48:49 -- nvmf/common.sh@117 -- # sync 00:14:57.726 00:48:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:57.726 00:48:49 -- nvmf/common.sh@120 -- # set +e 00:14:57.726 00:48:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.726 00:48:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:57.726 rmmod nvme_tcp 00:14:57.726 rmmod nvme_fabrics 00:14:57.726 rmmod nvme_keyring 00:14:57.726 00:48:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:57.726 00:48:49 -- nvmf/common.sh@124 -- # set -e 00:14:57.726 00:48:49 -- nvmf/common.sh@125 -- # return 0 00:14:57.726 00:48:49 -- nvmf/common.sh@478 -- # '[' -n 1663197 ']' 00:14:57.726 00:48:49 -- nvmf/common.sh@479 -- # killprocess 1663197 00:14:57.726 00:48:49 -- common/autotest_common.sh@936 -- # '[' -z 1663197 ']' 00:14:57.726 00:48:49 -- common/autotest_common.sh@940 -- # kill -0 1663197 00:14:57.726 00:48:49 -- common/autotest_common.sh@941 -- # uname 00:14:57.726 00:48:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:57.726 00:48:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1663197 00:14:57.726 00:48:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:57.726 00:48:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:57.726 00:48:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1663197' 00:14:57.726 killing process with pid 1663197 00:14:57.726 00:48:50 -- common/autotest_common.sh@955 -- # kill 1663197 00:14:57.726 00:48:50 -- common/autotest_common.sh@960 -- # wait 1663197 00:14:57.726 00:48:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:57.726 00:48:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:57.726 00:48:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:57.726 00:48:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.726 00:48:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.726 00:48:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.726 00:48:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.726 00:48:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.629 00:48:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:59.629 00:14:59.629 real 0m31.023s 00:14:59.629 user 0m42.336s 00:14:59.629 sys 0m9.880s 00:14:59.629 00:48:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:59.629 00:48:52 -- common/autotest_common.sh@10 -- # set +x 00:14:59.629 ************************************ 00:14:59.629 END TEST nvmf_zcopy 00:14:59.629 ************************************ 00:14:59.888 00:48:52 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:59.888 00:48:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:59.888 00:48:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:59.888 00:48:52 -- common/autotest_common.sh@10 -- # set +x 00:14:59.888 ************************************ 
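For reference, the zcopy abort phase summarized just above reduces to the command sequence below, restated from the entries earlier in this run. The bdev and subsystem names (malloc0, delay0, nqn.2016-06.io.spdk:cnode1), the 10.0.0.2:4420 listener and the latency values are all taken from this log; rpc_cmd is the harness's wrapper around scripts/rpc.py, and the -r/-t/-w/-n values are the artificial per-I/O latencies (in microseconds) applied by the delay bdev.

    # swap the plain malloc namespace for one backed by a delay bdev
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # drive 50/50 random read/write at queue depth 64 for 5 s and abort commands in flight;
    # the NS/CTRLR summary above counts the completed I/O and the submitted/successful aborts
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
        -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'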
00:14:59.888 START TEST nvmf_nmic 00:14:59.888 ************************************ 00:14:59.888 00:48:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:59.888 * Looking for test storage... 00:14:59.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:59.888 00:48:52 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.888 00:48:52 -- nvmf/common.sh@7 -- # uname -s 00:14:59.888 00:48:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.888 00:48:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.888 00:48:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.888 00:48:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.888 00:48:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.888 00:48:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.888 00:48:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.888 00:48:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.888 00:48:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.888 00:48:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.888 00:48:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.888 00:48:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.888 00:48:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.888 00:48:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.888 00:48:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.888 00:48:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.888 00:48:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.888 00:48:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.888 00:48:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.888 00:48:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.888 00:48:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.888 00:48:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.888 00:48:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.888 00:48:52 -- paths/export.sh@5 -- # export PATH 00:14:59.889 00:48:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.889 00:48:52 -- nvmf/common.sh@47 -- # : 0 00:14:59.889 00:48:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.889 00:48:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.889 00:48:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.889 00:48:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.889 00:48:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.889 00:48:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.889 00:48:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.889 00:48:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.889 00:48:52 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:00.148 00:48:52 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:00.148 00:48:52 -- target/nmic.sh@14 -- # nvmftestinit 00:15:00.148 00:48:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:00.148 00:48:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.148 00:48:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:00.148 00:48:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:00.148 00:48:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:00.148 00:48:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.148 00:48:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.148 00:48:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.148 00:48:52 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:00.148 00:48:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:00.148 00:48:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:00.148 00:48:52 -- common/autotest_common.sh@10 -- # set +x 00:15:05.458 00:48:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:05.458 00:48:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:05.458 00:48:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:05.458 00:48:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:05.458 00:48:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:05.458 00:48:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:05.458 00:48:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:05.458 00:48:57 -- nvmf/common.sh@295 -- # net_devs=() 00:15:05.458 00:48:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:05.458 00:48:57 -- nvmf/common.sh@296 -- # 
e810=() 00:15:05.458 00:48:57 -- nvmf/common.sh@296 -- # local -ga e810 00:15:05.458 00:48:57 -- nvmf/common.sh@297 -- # x722=() 00:15:05.458 00:48:57 -- nvmf/common.sh@297 -- # local -ga x722 00:15:05.458 00:48:57 -- nvmf/common.sh@298 -- # mlx=() 00:15:05.458 00:48:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:05.458 00:48:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:05.458 00:48:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:05.458 00:48:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:05.458 00:48:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:05.458 00:48:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:05.458 00:48:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:05.458 00:48:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:05.458 00:48:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:05.458 00:48:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:05.458 00:48:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:05.458 00:48:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:05.458 00:48:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:05.458 00:48:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:05.458 00:48:57 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:05.459 00:48:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:05.459 00:48:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.459 00:48:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:05.459 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:05.459 00:48:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:05.459 00:48:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:05.459 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:05.459 00:48:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:05.459 00:48:57 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.459 00:48:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.459 00:48:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:05.459 00:48:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.459 00:48:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:05.459 Found net 
devices under 0000:86:00.0: cvl_0_0 00:15:05.459 00:48:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.459 00:48:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:05.459 00:48:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:05.459 00:48:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:05.459 00:48:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:05.459 00:48:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:05.459 Found net devices under 0000:86:00.1: cvl_0_1 00:15:05.459 00:48:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:05.459 00:48:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:05.459 00:48:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:05.459 00:48:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:05.459 00:48:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:05.459 00:48:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:05.459 00:48:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:05.459 00:48:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:05.459 00:48:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:05.459 00:48:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:05.459 00:48:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:05.459 00:48:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:05.459 00:48:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:05.459 00:48:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:05.459 00:48:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:05.459 00:48:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:05.459 00:48:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:05.459 00:48:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:05.459 00:48:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:05.459 00:48:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:05.459 00:48:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:05.459 00:48:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:05.459 00:48:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:05.459 00:48:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:05.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:15:05.459 00:15:05.459 --- 10.0.0.2 ping statistics --- 00:15:05.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.459 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:15:05.459 00:48:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:05.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:05.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:15:05.459 00:15:05.459 --- 10.0.0.1 ping statistics --- 00:15:05.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.459 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:15:05.459 00:48:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.459 00:48:57 -- nvmf/common.sh@411 -- # return 0 00:15:05.459 00:48:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:05.459 00:48:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.459 00:48:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:05.459 00:48:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.459 00:48:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:05.459 00:48:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:05.459 00:48:57 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:05.459 00:48:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:05.459 00:48:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:05.459 00:48:57 -- common/autotest_common.sh@10 -- # set +x 00:15:05.459 00:48:57 -- nvmf/common.sh@470 -- # nvmfpid=1670633 00:15:05.459 00:48:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:05.459 00:48:57 -- nvmf/common.sh@471 -- # waitforlisten 1670633 00:15:05.459 00:48:57 -- common/autotest_common.sh@817 -- # '[' -z 1670633 ']' 00:15:05.459 00:48:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.459 00:48:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:05.459 00:48:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.459 00:48:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:05.459 00:48:57 -- common/autotest_common.sh@10 -- # set +x 00:15:05.459 [2024-04-27 00:48:58.025323] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:15:05.459 [2024-04-27 00:48:58.025364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.459 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.459 [2024-04-27 00:48:58.082747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.719 [2024-04-27 00:48:58.160567] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.719 [2024-04-27 00:48:58.160605] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.719 [2024-04-27 00:48:58.160613] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.719 [2024-04-27 00:48:58.160618] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.719 [2024-04-27 00:48:58.160624] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
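For orientation, the nvmf_tcp_init trace above builds the test's loopback topology by hand: one E810 port (cvl_0_0) is moved into a private network namespace for the target side while the other port (cvl_0_1) stays in the root namespace as the initiator, NVMe/TCP traffic to port 4420 is explicitly allowed, and both directions are ping-checked. A condensed sketch of that sequence, paraphrased from the traced commands (the interface and namespace names are the ones used in this run and will differ on other hosts):

    # target side lives in its own namespace, initiator stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # 10.0.0.1 = initiator (cvl_0_1), 10.0.0.2 = target (cvl_0_0 inside the netns)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring up both ports plus loopback inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the default NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every later target-side command, including the nvmf_tgt invocation traced just above, is wrapped in "ip netns exec cvl_0_0_ns_spdk" so the target's listener binds inside the namespace while the nvme initiator stays outside it.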
00:15:05.719 [2024-04-27 00:48:58.160661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.719 [2024-04-27 00:48:58.160757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.719 [2024-04-27 00:48:58.160835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.719 [2024-04-27 00:48:58.160837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.289 00:48:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:06.289 00:48:58 -- common/autotest_common.sh@850 -- # return 0 00:15:06.289 00:48:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:06.289 00:48:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:06.289 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:15:06.289 00:48:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.289 00:48:58 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:06.289 00:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.289 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:15:06.289 [2024-04-27 00:48:58.871855] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.289 00:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.289 00:48:58 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:06.289 00:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.289 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:15:06.289 Malloc0 00:15:06.289 00:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.289 00:48:58 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:06.289 00:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.289 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:15:06.289 00:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.289 00:48:58 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:06.289 00:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.289 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:15:06.289 00:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.289 00:48:58 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.289 00:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.289 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:15:06.289 [2024-04-27 00:48:58.923945] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.289 00:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.289 00:48:58 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:06.289 test case1: single bdev can't be used in multiple subsystems 00:15:06.289 00:48:58 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:06.289 00:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.289 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:15:06.289 00:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.289 00:48:58 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:06.289 00:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 
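The rpc_cmd helper used above is a thin wrapper that forwards each command to the target's JSON-RPC socket via scripts/rpc.py, so the cnode1 provisioning just traced can be read as a plain sequence of RPCs. A hand-runnable sketch of that same sequence (this is not the harness code itself; the netns prefix and paths simply match this particular run):

    NS="ip netns exec cvl_0_0_ns_spdk"
    RPC="$NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    $RPC nvmf_create_transport -t tcp -o -u 8192             # TCP transport, same options as the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case 1 then creates a second subsystem (cnode2) and tries to add the same Malloc0 to it; because cnode1 already holds an exclusive_write claim on that bdev, the second nvmf_subsystem_add_ns is expected to fail, which is exactly the JSON-RPC error shown next in the log.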
00:15:06.289 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:15:06.289 00:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.289 00:48:58 -- target/nmic.sh@28 -- # nmic_status=0 00:15:06.289 00:48:58 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:06.289 00:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.289 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:15:06.289 [2024-04-27 00:48:58.947874] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:06.289 [2024-04-27 00:48:58.947895] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:06.289 [2024-04-27 00:48:58.947902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.289 request: 00:15:06.289 { 00:15:06.289 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:06.289 "namespace": { 00:15:06.289 "bdev_name": "Malloc0", 00:15:06.289 "no_auto_visible": false 00:15:06.289 }, 00:15:06.289 "method": "nvmf_subsystem_add_ns", 00:15:06.289 "req_id": 1 00:15:06.289 } 00:15:06.289 Got JSON-RPC error response 00:15:06.289 response: 00:15:06.289 { 00:15:06.289 "code": -32602, 00:15:06.289 "message": "Invalid parameters" 00:15:06.289 } 00:15:06.289 00:48:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:06.289 00:48:58 -- target/nmic.sh@29 -- # nmic_status=1 00:15:06.289 00:48:58 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:06.289 00:48:58 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:06.289 Adding namespace failed - expected result. 00:15:06.289 00:48:58 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:06.289 test case2: host connect to nvmf target in multiple paths 00:15:06.289 00:48:58 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:06.289 00:48:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.289 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:15:06.289 [2024-04-27 00:48:58.959982] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:06.289 00:48:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.289 00:48:58 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:07.667 00:49:00 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:08.605 00:49:01 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.605 00:49:01 -- common/autotest_common.sh@1184 -- # local i=0 00:15:08.605 00:49:01 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.605 00:49:01 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:08.605 00:49:01 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:11.142 00:49:03 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:11.142 00:49:03 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:11.142 00:49:03 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.142 00:49:03 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:11.142 00:49:03 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.142 00:49:03 -- common/autotest_common.sh@1194 -- # return 0 00:15:11.142 00:49:03 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:11.142 [global] 00:15:11.142 thread=1 00:15:11.142 invalidate=1 00:15:11.142 rw=write 00:15:11.142 time_based=1 00:15:11.142 runtime=1 00:15:11.142 ioengine=libaio 00:15:11.142 direct=1 00:15:11.142 bs=4096 00:15:11.142 iodepth=1 00:15:11.142 norandommap=0 00:15:11.142 numjobs=1 00:15:11.142 00:15:11.142 verify_dump=1 00:15:11.142 verify_backlog=512 00:15:11.142 verify_state_save=0 00:15:11.142 do_verify=1 00:15:11.142 verify=crc32c-intel 00:15:11.142 [job0] 00:15:11.142 filename=/dev/nvme0n1 00:15:11.142 Could not set queue depth (nvme0n1) 00:15:11.142 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:11.142 fio-3.35 00:15:11.142 Starting 1 thread 00:15:12.082 00:15:12.082 job0: (groupid=0, jobs=1): err= 0: pid=1671708: Sat Apr 27 00:49:04 2024 00:15:12.082 read: IOPS=1018, BW=4076KiB/s (4174kB/s)(4080KiB/1001msec) 00:15:12.082 slat (nsec): min=6354, max=51140, avg=16910.41, stdev=7596.42 00:15:12.082 clat (usec): min=298, max=1068, avg=634.60, stdev=108.63 00:15:12.082 lat (usec): min=305, max=1090, avg=651.51, stdev=112.74 00:15:12.082 clat percentiles (usec): 00:15:12.082 | 1.00th=[ 310], 5.00th=[ 445], 10.00th=[ 461], 20.00th=[ 570], 00:15:12.082 | 30.00th=[ 619], 40.00th=[ 644], 50.00th=[ 660], 60.00th=[ 676], 00:15:12.082 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 734], 95.00th=[ 750], 00:15:12.082 | 99.00th=[ 898], 99.50th=[ 947], 99.90th=[ 1020], 99.95th=[ 1074], 00:15:12.082 | 99.99th=[ 1074] 00:15:12.082 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:15:12.082 slat (usec): min=9, max=26619, avg=36.52, stdev=831.55 00:15:12.082 clat (usec): min=197, max=896, avg=280.96, stdev=113.47 00:15:12.082 lat (usec): min=206, max=27431, avg=317.48, stdev=855.71 00:15:12.082 clat percentiles (usec): 00:15:12.082 | 1.00th=[ 200], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:15:12.082 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 247], 00:15:12.082 | 70.00th=[ 285], 80.00th=[ 351], 90.00th=[ 453], 95.00th=[ 529], 00:15:12.082 | 99.00th=[ 668], 99.50th=[ 701], 99.90th=[ 816], 99.95th=[ 898], 00:15:12.082 | 99.99th=[ 898] 00:15:12.082 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:12.082 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:12.082 lat (usec) : 250=30.53%, 500=23.39%, 750=43.44%, 1000=2.50% 00:15:12.082 lat (msec) : 2=0.15% 00:15:12.082 cpu : usr=1.90%, sys=2.70%, ctx=2047, majf=0, minf=2 00:15:12.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:12.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.082 issued rwts: total=1020,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:12.082 00:15:12.082 Run status group 0 (all jobs): 00:15:12.082 READ: bw=4076KiB/s (4174kB/s), 4076KiB/s-4076KiB/s (4174kB/s-4174kB/s), io=4080KiB (4178kB), run=1001-1001msec 00:15:12.082 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s 
(4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:15:12.082 00:15:12.082 Disk stats (read/write): 00:15:12.082 nvme0n1: ios=855/1024, merge=0/0, ticks=1499/275, in_queue=1774, util=98.70% 00:15:12.082 00:49:04 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:12.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:12.343 00:49:04 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:12.343 00:49:04 -- common/autotest_common.sh@1205 -- # local i=0 00:15:12.343 00:49:04 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:12.343 00:49:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.343 00:49:04 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:12.343 00:49:04 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.343 00:49:04 -- common/autotest_common.sh@1217 -- # return 0 00:15:12.343 00:49:04 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:12.343 00:49:04 -- target/nmic.sh@53 -- # nvmftestfini 00:15:12.343 00:49:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:12.343 00:49:04 -- nvmf/common.sh@117 -- # sync 00:15:12.343 00:49:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:12.343 00:49:04 -- nvmf/common.sh@120 -- # set +e 00:15:12.343 00:49:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:12.343 00:49:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:12.343 rmmod nvme_tcp 00:15:12.343 rmmod nvme_fabrics 00:15:12.343 rmmod nvme_keyring 00:15:12.343 00:49:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:12.343 00:49:04 -- nvmf/common.sh@124 -- # set -e 00:15:12.343 00:49:04 -- nvmf/common.sh@125 -- # return 0 00:15:12.343 00:49:04 -- nvmf/common.sh@478 -- # '[' -n 1670633 ']' 00:15:12.343 00:49:04 -- nvmf/common.sh@479 -- # killprocess 1670633 00:15:12.343 00:49:04 -- common/autotest_common.sh@936 -- # '[' -z 1670633 ']' 00:15:12.343 00:49:04 -- common/autotest_common.sh@940 -- # kill -0 1670633 00:15:12.343 00:49:04 -- common/autotest_common.sh@941 -- # uname 00:15:12.343 00:49:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:12.343 00:49:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1670633 00:15:12.343 00:49:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:12.343 00:49:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:12.343 00:49:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1670633' 00:15:12.343 killing process with pid 1670633 00:15:12.343 00:49:04 -- common/autotest_common.sh@955 -- # kill 1670633 00:15:12.343 00:49:04 -- common/autotest_common.sh@960 -- # wait 1670633 00:15:12.603 00:49:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:12.603 00:49:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:12.603 00:49:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:12.603 00:49:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.603 00:49:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.603 00:49:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.603 00:49:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.603 00:49:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.213 00:49:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:15.213 00:15:15.213 real 0m14.831s 00:15:15.213 user 0m34.981s 00:15:15.213 sys 0m4.892s 00:15:15.213 
00:49:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:15.213 00:49:07 -- common/autotest_common.sh@10 -- # set +x 00:15:15.213 ************************************ 00:15:15.213 END TEST nvmf_nmic 00:15:15.213 ************************************ 00:15:15.213 00:49:07 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:15.213 00:49:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:15.213 00:49:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:15.213 00:49:07 -- common/autotest_common.sh@10 -- # set +x 00:15:15.213 ************************************ 00:15:15.213 START TEST nvmf_fio_target 00:15:15.213 ************************************ 00:15:15.213 00:49:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:15.213 * Looking for test storage... 00:15:15.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:15.213 00:49:07 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.213 00:49:07 -- nvmf/common.sh@7 -- # uname -s 00:15:15.213 00:49:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.213 00:49:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.213 00:49:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.213 00:49:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.213 00:49:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.213 00:49:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.213 00:49:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.213 00:49:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.213 00:49:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.213 00:49:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.213 00:49:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:15.213 00:49:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:15.213 00:49:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.213 00:49:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.213 00:49:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.213 00:49:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.213 00:49:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:15.213 00:49:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.213 00:49:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.213 00:49:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.213 00:49:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.214 00:49:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.214 00:49:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.214 00:49:07 -- paths/export.sh@5 -- # export PATH 00:15:15.214 00:49:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.214 00:49:07 -- nvmf/common.sh@47 -- # : 0 00:15:15.214 00:49:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:15.214 00:49:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:15.214 00:49:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.214 00:49:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.214 00:49:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.214 00:49:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:15.214 00:49:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:15.214 00:49:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:15.214 00:49:07 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:15.214 00:49:07 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:15.214 00:49:07 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:15.214 00:49:07 -- target/fio.sh@16 -- # nvmftestinit 00:15:15.214 00:49:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:15.214 00:49:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:15.214 00:49:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:15.214 00:49:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:15.214 00:49:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:15.214 00:49:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.214 00:49:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.214 00:49:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:15.214 00:49:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:15.214 00:49:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:15.214 00:49:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:15.214 00:49:07 -- 
common/autotest_common.sh@10 -- # set +x 00:15:20.493 00:49:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:20.493 00:49:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:20.493 00:49:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:20.493 00:49:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:20.493 00:49:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:20.493 00:49:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:20.493 00:49:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:20.493 00:49:12 -- nvmf/common.sh@295 -- # net_devs=() 00:15:20.493 00:49:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:20.493 00:49:12 -- nvmf/common.sh@296 -- # e810=() 00:15:20.493 00:49:12 -- nvmf/common.sh@296 -- # local -ga e810 00:15:20.493 00:49:12 -- nvmf/common.sh@297 -- # x722=() 00:15:20.493 00:49:12 -- nvmf/common.sh@297 -- # local -ga x722 00:15:20.493 00:49:12 -- nvmf/common.sh@298 -- # mlx=() 00:15:20.493 00:49:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:20.493 00:49:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.493 00:49:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.493 00:49:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.493 00:49:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.493 00:49:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.493 00:49:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.493 00:49:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.493 00:49:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.493 00:49:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.493 00:49:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.493 00:49:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.493 00:49:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:20.493 00:49:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:20.493 00:49:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:20.493 00:49:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.493 00:49:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:20.493 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:20.493 00:49:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:20.493 00:49:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:20.493 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:20.493 00:49:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
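The device discovery being traced here (and earlier, before the nmic run) boils down to sysfs globbing: nvmf/common.sh keeps per-family lists of supported vendor:device IDs (e810, x722, mlx), matches them against the PCI bus, and then reads each matching function's net/ directory to learn the kernel interface name. A minimal standalone sketch of the same idea, restricted to the 0x8086:0x159b (E810) parts present on this host:

    #!/usr/bin/env bash
    # List net devices backed by Intel E810 (vendor 0x8086, device 0x159b) PCI functions.
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(cat "$dev/vendor")      # e.g. 0x8086
        device=$(cat "$dev/device")      # e.g. 0x159b
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        echo "Found ${dev##*/} ($vendor - $device)"
        for net in "$dev"/net/*; do
            [[ -e $net ]] || continue
            echo "Found net devices under ${dev##*/}: ${net##*/}"
        done
    done

On this system that yields the two ice-driven ports 0000:86:00.0 (cvl_0_0) and 0000:86:00.1 (cvl_0_1) from which the TCP loopback is built.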
00:15:20.493 00:49:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:20.493 00:49:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.493 00:49:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.493 00:49:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:20.493 00:49:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.493 00:49:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:20.493 Found net devices under 0000:86:00.0: cvl_0_0 00:15:20.493 00:49:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.493 00:49:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:20.493 00:49:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.493 00:49:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:20.493 00:49:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.493 00:49:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:20.493 Found net devices under 0000:86:00.1: cvl_0_1 00:15:20.493 00:49:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.493 00:49:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:20.493 00:49:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:20.493 00:49:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:20.493 00:49:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.493 00:49:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.493 00:49:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:20.493 00:49:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:20.493 00:49:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:20.493 00:49:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:20.493 00:49:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:20.493 00:49:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:20.493 00:49:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.493 00:49:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:20.493 00:49:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:20.493 00:49:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:20.493 00:49:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:20.493 00:49:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:20.493 00:49:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:20.493 00:49:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:20.493 00:49:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:20.493 00:49:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:20.493 00:49:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:20.493 00:49:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:20.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:20.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:15:20.493 00:15:20.493 --- 10.0.0.2 ping statistics --- 00:15:20.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.493 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:15:20.493 00:49:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:20.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:20.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:15:20.493 00:15:20.493 --- 10.0.0.1 ping statistics --- 00:15:20.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.493 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:15:20.493 00:49:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.493 00:49:12 -- nvmf/common.sh@411 -- # return 0 00:15:20.493 00:49:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:20.493 00:49:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.493 00:49:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:20.493 00:49:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.493 00:49:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:20.493 00:49:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:20.493 00:49:12 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:20.493 00:49:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:20.493 00:49:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:20.493 00:49:12 -- common/autotest_common.sh@10 -- # set +x 00:15:20.493 00:49:12 -- nvmf/common.sh@470 -- # nvmfpid=1675255 00:15:20.493 00:49:12 -- nvmf/common.sh@471 -- # waitforlisten 1675255 00:15:20.493 00:49:12 -- common/autotest_common.sh@817 -- # '[' -z 1675255 ']' 00:15:20.493 00:49:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.493 00:49:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:20.493 00:49:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.493 00:49:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:20.493 00:49:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:20.493 00:49:12 -- common/autotest_common.sh@10 -- # set +x 00:15:20.493 [2024-04-27 00:49:12.621357] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:15:20.493 [2024-04-27 00:49:12.621400] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.493 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.493 [2024-04-27 00:49:12.682780] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.493 [2024-04-27 00:49:12.756987] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.493 [2024-04-27 00:49:12.757025] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
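waitforlisten, invoked above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, is the harness's way of blocking until the freshly started nvmf_tgt (PID 1675255 here) is actually answering JSON-RPC on its UNIX socket. A simplified stand-in is sketched below; the real helper lives in autotest_common.sh, and the rpc_get_methods probe is just one reasonable way to test the socket, not necessarily the exact call the harness makes:

    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    wait_for_rpc_socket() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i=0
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( i++ < max_retries )); do
            kill -0 "$pid" 2>/dev/null || return 1                      # target exited early
            if "$RPC_PY" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                                                # RPC server is up
            fi
            sleep 0.5
        done
        return 1                                                        # timed out
    }

    # e.g. wait_for_rpc_socket "$nvmfpid" /var/tmp/spdk.sock 100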
00:15:20.493 [2024-04-27 00:49:12.757032] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.493 [2024-04-27 00:49:12.757038] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.493 [2024-04-27 00:49:12.757043] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.493 [2024-04-27 00:49:12.757093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.493 [2024-04-27 00:49:12.757150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.493 [2024-04-27 00:49:12.757235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.494 [2024-04-27 00:49:12.757236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.751 00:49:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:20.751 00:49:13 -- common/autotest_common.sh@850 -- # return 0 00:15:20.752 00:49:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:20.752 00:49:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:20.752 00:49:13 -- common/autotest_common.sh@10 -- # set +x 00:15:21.009 00:49:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.009 00:49:13 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:21.009 [2024-04-27 00:49:13.609521] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.009 00:49:13 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.268 00:49:13 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:21.268 00:49:13 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.526 00:49:14 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:21.526 00:49:14 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.527 00:49:14 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:21.527 00:49:14 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.786 00:49:14 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:21.786 00:49:14 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:22.045 00:49:14 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:22.304 00:49:14 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:22.304 00:49:14 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:22.304 00:49:14 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:22.304 00:49:14 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:22.563 00:49:15 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:22.563 00:49:15 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:22.822 00:49:15 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:23.080 00:49:15 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:23.080 00:49:15 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.080 00:49:15 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:23.080 00:49:15 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:23.338 00:49:15 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.597 [2024-04-27 00:49:16.055244] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.597 00:49:16 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:23.597 00:49:16 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:23.856 00:49:16 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:25.232 00:49:17 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:25.232 00:49:17 -- common/autotest_common.sh@1184 -- # local i=0 00:15:25.232 00:49:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.232 00:49:17 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:15:25.232 00:49:17 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:15:25.232 00:49:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:27.137 00:49:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:27.137 00:49:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:27.137 00:49:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:27.137 00:49:19 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:15:27.137 00:49:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:27.137 00:49:19 -- common/autotest_common.sh@1194 -- # return 0 00:15:27.137 00:49:19 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:27.137 [global] 00:15:27.137 thread=1 00:15:27.137 invalidate=1 00:15:27.137 rw=write 00:15:27.137 time_based=1 00:15:27.137 runtime=1 00:15:27.137 ioengine=libaio 00:15:27.137 direct=1 00:15:27.137 bs=4096 00:15:27.137 iodepth=1 00:15:27.137 norandommap=0 00:15:27.137 numjobs=1 00:15:27.137 00:15:27.137 verify_dump=1 00:15:27.137 verify_backlog=512 00:15:27.137 verify_state_save=0 00:15:27.137 do_verify=1 00:15:27.137 verify=crc32c-intel 00:15:27.137 [job0] 00:15:27.137 filename=/dev/nvme0n1 00:15:27.137 [job1] 00:15:27.137 filename=/dev/nvme0n2 00:15:27.137 [job2] 00:15:27.137 filename=/dev/nvme0n3 00:15:27.137 [job3] 00:15:27.137 filename=/dev/nvme0n4 00:15:27.137 Could not set queue depth (nvme0n1) 00:15:27.137 Could not set queue depth (nvme0n2) 00:15:27.137 Could not set queue depth (nvme0n3) 00:15:27.137 Could not set queue depth (nvme0n4) 00:15:27.395 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:15:27.395 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.395 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.395 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.395 fio-3.35 00:15:27.395 Starting 4 threads 00:15:28.772 00:15:28.772 job0: (groupid=0, jobs=1): err= 0: pid=1676734: Sat Apr 27 00:49:21 2024 00:15:28.772 read: IOPS=19, BW=76.9KiB/s (78.8kB/s)(80.0KiB/1040msec) 00:15:28.772 slat (nsec): min=8744, max=22840, avg=21849.20, stdev=3091.44 00:15:28.772 clat (usec): min=41033, max=42969, avg=41948.85, stdev=328.92 00:15:28.772 lat (usec): min=41056, max=42991, avg=41970.70, stdev=329.69 00:15:28.772 clat percentiles (usec): 00:15:28.772 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:15:28.772 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:15:28.772 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:28.772 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:15:28.772 | 99.99th=[42730] 00:15:28.772 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:15:28.772 slat (usec): min=9, max=41454, avg=143.35, stdev=2158.56 00:15:28.772 clat (usec): min=191, max=674, avg=244.54, stdev=61.57 00:15:28.772 lat (usec): min=203, max=41903, avg=387.88, stdev=2177.17 00:15:28.772 clat percentiles (usec): 00:15:28.772 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:15:28.772 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:15:28.772 | 70.00th=[ 239], 80.00th=[ 265], 90.00th=[ 343], 95.00th=[ 363], 00:15:28.772 | 99.00th=[ 461], 99.50th=[ 469], 99.90th=[ 676], 99.95th=[ 676], 00:15:28.772 | 99.99th=[ 676] 00:15:28.772 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:28.772 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:28.772 lat (usec) : 250=72.18%, 500=23.87%, 750=0.19% 00:15:28.772 lat (msec) : 50=3.76% 00:15:28.772 cpu : usr=0.29%, sys=0.58%, ctx=535, majf=0, minf=1 00:15:28.772 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:28.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.772 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.772 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:28.772 job1: (groupid=0, jobs=1): err= 0: pid=1676756: Sat Apr 27 00:49:21 2024 00:15:28.772 read: IOPS=18, BW=75.7KiB/s (77.5kB/s)(76.0KiB/1004msec) 00:15:28.772 slat (nsec): min=8687, max=22689, avg=12049.16, stdev=4343.39 00:15:28.772 clat (usec): min=41376, max=42074, avg=41957.55, stdev=156.78 00:15:28.772 lat (usec): min=41388, max=42084, avg=41969.60, stdev=155.87 00:15:28.772 clat percentiles (usec): 00:15:28.772 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:15:28.772 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:15:28.772 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:28.772 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:28.772 | 99.99th=[42206] 00:15:28.772 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:15:28.772 slat (nsec): min=8833, max=42111, 
avg=11892.03, stdev=3023.10 00:15:28.772 clat (usec): min=201, max=1271, avg=387.94, stdev=128.28 00:15:28.772 lat (usec): min=211, max=1282, avg=399.83, stdev=128.44 00:15:28.772 clat percentiles (usec): 00:15:28.772 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 281], 00:15:28.772 | 30.00th=[ 326], 40.00th=[ 347], 50.00th=[ 363], 60.00th=[ 388], 00:15:28.772 | 70.00th=[ 433], 80.00th=[ 465], 90.00th=[ 529], 95.00th=[ 644], 00:15:28.772 | 99.00th=[ 758], 99.50th=[ 930], 99.90th=[ 1270], 99.95th=[ 1270], 00:15:28.772 | 99.99th=[ 1270] 00:15:28.772 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:28.772 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:28.772 lat (usec) : 250=12.62%, 500=67.61%, 750=14.50%, 1000=1.32% 00:15:28.772 lat (msec) : 2=0.38%, 50=3.58% 00:15:28.772 cpu : usr=0.20%, sys=0.70%, ctx=532, majf=0, minf=1 00:15:28.772 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:28.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.772 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.772 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:28.772 job2: (groupid=0, jobs=1): err= 0: pid=1676787: Sat Apr 27 00:49:21 2024 00:15:28.772 read: IOPS=20, BW=81.7KiB/s (83.7kB/s)(84.0KiB/1028msec) 00:15:28.772 slat (nsec): min=8918, max=22649, avg=21508.71, stdev=2893.88 00:15:28.772 clat (usec): min=41092, max=42944, avg=41958.51, stdev=301.69 00:15:28.772 lat (usec): min=41114, max=42966, avg=41980.02, stdev=302.37 00:15:28.772 clat percentiles (usec): 00:15:28.772 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:15:28.772 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:15:28.772 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:28.773 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:15:28.773 | 99.99th=[42730] 00:15:28.773 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:15:28.773 slat (nsec): min=8986, max=41567, avg=10193.48, stdev=1691.70 00:15:28.773 clat (usec): min=209, max=704, avg=272.67, stdev=75.33 00:15:28.773 lat (usec): min=220, max=745, avg=282.87, stdev=75.86 00:15:28.773 clat percentiles (usec): 00:15:28.773 | 1.00th=[ 215], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 225], 00:15:28.773 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 260], 00:15:28.773 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 343], 95.00th=[ 445], 00:15:28.773 | 99.00th=[ 537], 99.50th=[ 545], 99.90th=[ 701], 99.95th=[ 701], 00:15:28.773 | 99.99th=[ 701] 00:15:28.773 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:28.773 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:28.773 lat (usec) : 250=52.91%, 500=38.84%, 750=4.32% 00:15:28.773 lat (msec) : 50=3.94% 00:15:28.773 cpu : usr=0.10%, sys=0.58%, ctx=533, majf=0, minf=1 00:15:28.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:28.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.773 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:28.773 job3: (groupid=0, jobs=1): err= 0: 
pid=1676797: Sat Apr 27 00:49:21 2024 00:15:28.773 read: IOPS=474, BW=1900KiB/s (1945kB/s)(1928KiB/1015msec) 00:15:28.773 slat (nsec): min=7520, max=44374, avg=8975.40, stdev=3152.14 00:15:28.773 clat (usec): min=310, max=45914, avg=1684.42, stdev=7019.27 00:15:28.773 lat (usec): min=318, max=45947, avg=1693.40, stdev=7021.49 00:15:28.773 clat percentiles (usec): 00:15:28.773 | 1.00th=[ 322], 5.00th=[ 343], 10.00th=[ 383], 20.00th=[ 461], 00:15:28.773 | 30.00th=[ 469], 40.00th=[ 478], 50.00th=[ 482], 60.00th=[ 490], 00:15:28.773 | 70.00th=[ 494], 80.00th=[ 502], 90.00th=[ 515], 95.00th=[ 586], 00:15:28.773 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:15:28.773 | 99.99th=[45876] 00:15:28.773 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:15:28.773 slat (usec): min=3, max=2323, avg=18.06, stdev=102.19 00:15:28.773 clat (usec): min=215, max=4057, avg=362.32, stdev=264.04 00:15:28.773 lat (usec): min=227, max=4075, avg=380.37, stdev=289.98 00:15:28.773 clat percentiles (usec): 00:15:28.773 | 1.00th=[ 225], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 241], 00:15:28.773 | 30.00th=[ 251], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 330], 00:15:28.773 | 70.00th=[ 396], 80.00th=[ 486], 90.00th=[ 553], 95.00th=[ 578], 00:15:28.773 | 99.00th=[ 725], 99.50th=[ 775], 99.90th=[ 4047], 99.95th=[ 4047], 00:15:28.773 | 99.99th=[ 4047] 00:15:28.773 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:28.773 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:28.773 lat (usec) : 250=15.39%, 500=64.59%, 750=18.21%, 1000=0.10% 00:15:28.773 lat (msec) : 2=0.10%, 4=0.10%, 10=0.10%, 50=1.41% 00:15:28.773 cpu : usr=0.79%, sys=1.78%, ctx=999, majf=0, minf=2 00:15:28.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:28.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.773 issued rwts: total=482,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:28.773 00:15:28.773 Run status group 0 (all jobs): 00:15:28.773 READ: bw=2085KiB/s (2135kB/s), 75.7KiB/s-1900KiB/s (77.5kB/s-1945kB/s), io=2168KiB (2220kB), run=1004-1040msec 00:15:28.773 WRITE: bw=7877KiB/s (8066kB/s), 1969KiB/s-2040KiB/s (2016kB/s-2089kB/s), io=8192KiB (8389kB), run=1004-1040msec 00:15:28.773 00:15:28.773 Disk stats (read/write): 00:15:28.773 nvme0n1: ios=36/512, merge=0/0, ticks=1430/119, in_queue=1549, util=86.77% 00:15:28.773 nvme0n2: ios=56/512, merge=0/0, ticks=789/189, in_queue=978, util=87.55% 00:15:28.773 nvme0n3: ios=72/512, merge=0/0, ticks=766/139, in_queue=905, util=91.97% 00:15:28.773 nvme0n4: ios=527/512, merge=0/0, ticks=745/180, in_queue=925, util=99.78% 00:15:28.773 00:49:21 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:28.773 [global] 00:15:28.773 thread=1 00:15:28.773 invalidate=1 00:15:28.773 rw=randwrite 00:15:28.773 time_based=1 00:15:28.773 runtime=1 00:15:28.773 ioengine=libaio 00:15:28.773 direct=1 00:15:28.773 bs=4096 00:15:28.773 iodepth=1 00:15:28.773 norandommap=0 00:15:28.773 numjobs=1 00:15:28.773 00:15:28.773 verify_dump=1 00:15:28.773 verify_backlog=512 00:15:28.773 verify_state_save=0 00:15:28.773 do_verify=1 00:15:28.773 verify=crc32c-intel 00:15:28.773 [job0] 00:15:28.773 filename=/dev/nvme0n1 
00:15:28.773 [job1] 00:15:28.773 filename=/dev/nvme0n2 00:15:28.773 [job2] 00:15:28.773 filename=/dev/nvme0n3 00:15:28.773 [job3] 00:15:28.773 filename=/dev/nvme0n4 00:15:28.773 Could not set queue depth (nvme0n1) 00:15:28.773 Could not set queue depth (nvme0n2) 00:15:28.773 Could not set queue depth (nvme0n3) 00:15:28.773 Could not set queue depth (nvme0n4) 00:15:29.032 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.032 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.032 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.032 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:29.032 fio-3.35 00:15:29.032 Starting 4 threads 00:15:30.408 00:15:30.408 job0: (groupid=0, jobs=1): err= 0: pid=1677192: Sat Apr 27 00:49:22 2024 00:15:30.408 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:15:30.408 slat (nsec): min=3906, max=51376, avg=9939.05, stdev=6048.87 00:15:30.408 clat (usec): min=321, max=1050, avg=526.94, stdev=158.27 00:15:30.408 lat (usec): min=328, max=1073, avg=536.88, stdev=162.97 00:15:30.408 clat percentiles (usec): 00:15:30.408 | 1.00th=[ 347], 5.00th=[ 396], 10.00th=[ 408], 20.00th=[ 424], 00:15:30.408 | 30.00th=[ 441], 40.00th=[ 449], 50.00th=[ 457], 60.00th=[ 465], 00:15:30.408 | 70.00th=[ 498], 80.00th=[ 644], 90.00th=[ 824], 95.00th=[ 889], 00:15:30.408 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 1037], 99.95th=[ 1057], 00:15:30.408 | 99.99th=[ 1057] 00:15:30.408 write: IOPS=1317, BW=5271KiB/s (5397kB/s)(5276KiB/1001msec); 0 zone resets 00:15:30.408 slat (usec): min=8, max=273, avg=11.91, stdev=16.07 00:15:30.408 clat (usec): min=44, max=957, avg=324.57, stdev=92.32 00:15:30.408 lat (usec): min=218, max=968, avg=336.48, stdev=94.01 00:15:30.408 clat percentiles (usec): 00:15:30.408 | 1.00th=[ 215], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 262], 00:15:30.408 | 30.00th=[ 273], 40.00th=[ 289], 50.00th=[ 314], 60.00th=[ 326], 00:15:30.408 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 424], 95.00th=[ 537], 00:15:30.408 | 99.00th=[ 668], 99.50th=[ 709], 99.90th=[ 881], 99.95th=[ 955], 00:15:30.408 | 99.99th=[ 955] 00:15:30.408 bw ( KiB/s): min= 4640, max= 4640, per=35.07%, avg=4640.00, stdev= 0.00, samples=1 00:15:30.408 iops : min= 1160, max= 1160, avg=1160.00, stdev= 0.00, samples=1 00:15:30.408 lat (usec) : 50=0.04%, 100=0.09%, 250=8.24%, 500=74.73%, 750=9.94% 00:15:30.408 lat (usec) : 1000=6.83% 00:15:30.408 lat (msec) : 2=0.13% 00:15:30.408 cpu : usr=1.10%, sys=2.90%, ctx=2344, majf=0, minf=2 00:15:30.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.408 issued rwts: total=1024,1319,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.408 job1: (groupid=0, jobs=1): err= 0: pid=1677193: Sat Apr 27 00:49:22 2024 00:15:30.408 read: IOPS=279, BW=1119KiB/s (1146kB/s)(1120KiB/1001msec) 00:15:30.408 slat (nsec): min=4531, max=25342, avg=8418.62, stdev=2712.47 00:15:30.408 clat (usec): min=449, max=43060, avg=2938.72, stdev=9550.94 00:15:30.408 lat (usec): min=457, max=43073, avg=2947.13, stdev=9552.40 00:15:30.408 clat percentiles (usec): 
00:15:30.408 | 1.00th=[ 453], 5.00th=[ 465], 10.00th=[ 478], 20.00th=[ 506], 00:15:30.408 | 30.00th=[ 519], 40.00th=[ 537], 50.00th=[ 562], 60.00th=[ 611], 00:15:30.408 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 938], 95.00th=[41681], 00:15:30.408 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:15:30.408 | 99.99th=[43254] 00:15:30.408 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:15:30.408 slat (nsec): min=8695, max=40565, avg=12073.95, stdev=1984.38 00:15:30.408 clat (usec): min=214, max=673, avg=325.50, stdev=85.97 00:15:30.408 lat (usec): min=230, max=714, avg=337.58, stdev=86.06 00:15:30.408 clat percentiles (usec): 00:15:30.408 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 249], 00:15:30.408 | 30.00th=[ 262], 40.00th=[ 277], 50.00th=[ 289], 60.00th=[ 343], 00:15:30.408 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 429], 95.00th=[ 519], 00:15:30.408 | 99.00th=[ 545], 99.50th=[ 545], 99.90th=[ 676], 99.95th=[ 676], 00:15:30.408 | 99.99th=[ 676] 00:15:30.408 bw ( KiB/s): min= 4096, max= 4096, per=30.96%, avg=4096.00, stdev= 0.00, samples=1 00:15:30.408 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:30.408 lat (usec) : 250=13.76%, 500=53.16%, 750=28.66%, 1000=2.02% 00:15:30.408 lat (msec) : 2=0.38%, 50=2.02% 00:15:30.408 cpu : usr=0.30%, sys=1.00%, ctx=792, majf=0, minf=1 00:15:30.408 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.408 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.408 issued rwts: total=280,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.408 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.408 job2: (groupid=0, jobs=1): err= 0: pid=1677194: Sat Apr 27 00:49:22 2024 00:15:30.408 read: IOPS=970, BW=3882KiB/s (3975kB/s)(3952KiB/1018msec) 00:15:30.408 slat (nsec): min=6416, max=48620, avg=10267.54, stdev=6466.16 00:15:30.408 clat (usec): min=382, max=42463, avg=728.22, stdev=2272.61 00:15:30.408 lat (usec): min=390, max=42470, avg=738.49, stdev=2273.15 00:15:30.408 clat percentiles (usec): 00:15:30.408 | 1.00th=[ 412], 5.00th=[ 457], 10.00th=[ 498], 20.00th=[ 537], 00:15:30.408 | 30.00th=[ 553], 40.00th=[ 553], 50.00th=[ 562], 60.00th=[ 570], 00:15:30.408 | 70.00th=[ 578], 80.00th=[ 676], 90.00th=[ 799], 95.00th=[ 906], 00:15:30.408 | 99.00th=[ 979], 99.50th=[ 1074], 99.90th=[42206], 99.95th=[42206], 00:15:30.408 | 99.99th=[42206] 00:15:30.408 write: IOPS=1005, BW=4024KiB/s (4120kB/s)(4096KiB/1018msec); 0 zone resets 00:15:30.408 slat (nsec): min=9218, max=37311, avg=10319.63, stdev=1627.63 00:15:30.408 clat (usec): min=199, max=933, avg=265.04, stdev=78.51 00:15:30.408 lat (usec): min=209, max=942, avg=275.36, stdev=78.98 00:15:30.408 clat percentiles (usec): 00:15:30.408 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 208], 20.00th=[ 215], 00:15:30.408 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 241], 60.00th=[ 255], 00:15:30.408 | 70.00th=[ 269], 80.00th=[ 297], 90.00th=[ 347], 95.00th=[ 420], 00:15:30.408 | 99.00th=[ 537], 99.50th=[ 668], 99.90th=[ 865], 99.95th=[ 930], 00:15:30.408 | 99.99th=[ 930] 00:15:30.408 bw ( KiB/s): min= 4096, max= 4096, per=30.96%, avg=4096.00, stdev= 0.00, samples=2 00:15:30.409 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:15:30.409 lat (usec) : 250=27.83%, 500=26.99%, 750=37.43%, 1000=7.46% 00:15:30.409 lat (msec) : 2=0.15%, 50=0.15% 00:15:30.409 cpu : usr=1.67%, 
sys=1.57%, ctx=2013, majf=0, minf=1 00:15:30.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.409 issued rwts: total=988,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.409 job3: (groupid=0, jobs=1): err= 0: pid=1677195: Sat Apr 27 00:49:22 2024 00:15:30.409 read: IOPS=19, BW=79.7KiB/s (81.6kB/s)(80.0KiB/1004msec) 00:15:30.409 slat (nsec): min=7833, max=22912, avg=20929.35, stdev=3645.71 00:15:30.409 clat (usec): min=41058, max=42019, avg=41873.18, stdev=260.29 00:15:30.409 lat (usec): min=41072, max=42042, avg=41894.11, stdev=263.66 00:15:30.409 clat percentiles (usec): 00:15:30.409 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:15:30.409 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:15:30.409 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:30.409 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:30.409 | 99.99th=[42206] 00:15:30.409 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:15:30.409 slat (nsec): min=3486, max=43146, avg=9734.09, stdev=3767.16 00:15:30.409 clat (usec): min=198, max=1180, avg=311.05, stdev=124.53 00:15:30.409 lat (usec): min=208, max=1187, avg=320.78, stdev=125.32 00:15:30.409 clat percentiles (usec): 00:15:30.409 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:15:30.409 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 269], 60.00th=[ 277], 00:15:30.409 | 70.00th=[ 326], 80.00th=[ 400], 90.00th=[ 523], 95.00th=[ 537], 00:15:30.409 | 99.00th=[ 717], 99.50th=[ 840], 99.90th=[ 1188], 99.95th=[ 1188], 00:15:30.409 | 99.99th=[ 1188] 00:15:30.409 bw ( KiB/s): min= 4096, max= 4096, per=30.96%, avg=4096.00, stdev= 0.00, samples=1 00:15:30.409 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:30.409 lat (usec) : 250=41.92%, 500=42.48%, 750=10.90%, 1000=0.56% 00:15:30.409 lat (msec) : 2=0.38%, 50=3.76% 00:15:30.409 cpu : usr=0.40%, sys=0.40%, ctx=533, majf=0, minf=1 00:15:30.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:30.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.409 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.409 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:30.409 00:15:30.409 Run status group 0 (all jobs): 00:15:30.409 READ: bw=9084KiB/s (9303kB/s), 79.7KiB/s-4092KiB/s (81.6kB/s-4190kB/s), io=9248KiB (9470kB), run=1001-1018msec 00:15:30.409 WRITE: bw=12.9MiB/s (13.5MB/s), 2040KiB/s-5271KiB/s (2089kB/s-5397kB/s), io=13.2MiB (13.8MB), run=1001-1018msec 00:15:30.409 00:15:30.409 Disk stats (read/write): 00:15:30.409 nvme0n1: ios=1013/1024, merge=0/0, ticks=569/313, in_queue=882, util=89.38% 00:15:30.409 nvme0n2: ios=82/512, merge=0/0, ticks=789/164, in_queue=953, util=93.50% 00:15:30.409 nvme0n3: ios=975/1024, merge=0/0, ticks=859/267, in_queue=1126, util=98.02% 00:15:30.409 nvme0n4: ios=59/512, merge=0/0, ticks=1698/151, in_queue=1849, util=97.17% 00:15:30.409 00:49:22 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:30.409 [global] 
00:15:30.409 thread=1 00:15:30.409 invalidate=1 00:15:30.409 rw=write 00:15:30.409 time_based=1 00:15:30.409 runtime=1 00:15:30.409 ioengine=libaio 00:15:30.409 direct=1 00:15:30.409 bs=4096 00:15:30.409 iodepth=128 00:15:30.409 norandommap=0 00:15:30.409 numjobs=1 00:15:30.409 00:15:30.409 verify_dump=1 00:15:30.409 verify_backlog=512 00:15:30.409 verify_state_save=0 00:15:30.409 do_verify=1 00:15:30.409 verify=crc32c-intel 00:15:30.409 [job0] 00:15:30.409 filename=/dev/nvme0n1 00:15:30.409 [job1] 00:15:30.409 filename=/dev/nvme0n2 00:15:30.409 [job2] 00:15:30.409 filename=/dev/nvme0n3 00:15:30.409 [job3] 00:15:30.409 filename=/dev/nvme0n4 00:15:30.409 Could not set queue depth (nvme0n1) 00:15:30.409 Could not set queue depth (nvme0n2) 00:15:30.409 Could not set queue depth (nvme0n3) 00:15:30.409 Could not set queue depth (nvme0n4) 00:15:30.668 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.668 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.668 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.668 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.668 fio-3.35 00:15:30.668 Starting 4 threads 00:15:32.044 00:15:32.044 job0: (groupid=0, jobs=1): err= 0: pid=1677572: Sat Apr 27 00:49:24 2024 00:15:32.044 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:15:32.044 slat (nsec): min=1011, max=71688k, avg=115900.41, stdev=1285995.36 00:15:32.044 clat (usec): min=7081, max=81680, avg=15440.04, stdev=9812.18 00:15:32.044 lat (usec): min=7759, max=81688, avg=15555.94, stdev=9866.72 00:15:32.044 clat percentiles (usec): 00:15:32.044 | 1.00th=[ 8586], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11076], 00:15:32.044 | 30.00th=[11469], 40.00th=[12911], 50.00th=[13829], 60.00th=[14615], 00:15:32.044 | 70.00th=[15139], 80.00th=[16188], 90.00th=[19268], 95.00th=[25297], 00:15:32.044 | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:15:32.044 | 99.99th=[81265] 00:15:32.044 write: IOPS=3644, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1002msec); 0 zone resets 00:15:32.044 slat (nsec): min=1983, max=45887k, avg=156083.10, stdev=1034630.49 00:15:32.044 clat (usec): min=822, max=85868, avg=19315.82, stdev=14197.70 00:15:32.044 lat (usec): min=6821, max=85876, avg=19471.90, stdev=14228.79 00:15:32.044 clat percentiles (usec): 00:15:32.044 | 1.00th=[ 6980], 5.00th=[ 8225], 10.00th=[ 9634], 20.00th=[10421], 00:15:32.044 | 30.00th=[11863], 40.00th=[14615], 50.00th=[16319], 60.00th=[17171], 00:15:32.044 | 70.00th=[18744], 80.00th=[21365], 90.00th=[27919], 95.00th=[61080], 00:15:32.044 | 99.00th=[81265], 99.50th=[84411], 99.90th=[85459], 99.95th=[85459], 00:15:32.044 | 99.99th=[85459] 00:15:32.044 bw ( KiB/s): min=12288, max=16384, per=23.18%, avg=14336.00, stdev=2896.31, samples=2 00:15:32.044 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:15:32.044 lat (usec) : 1000=0.01% 00:15:32.044 lat (msec) : 10=10.13%, 20=72.75%, 50=13.34%, 100=3.77% 00:15:32.044 cpu : usr=1.90%, sys=2.80%, ctx=677, majf=0, minf=1 00:15:32.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:15:32.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.044 issued rwts: total=3584,3652,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:15:32.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.044 job1: (groupid=0, jobs=1): err= 0: pid=1677573: Sat Apr 27 00:49:24 2024 00:15:32.044 read: IOPS=4123, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1004msec) 00:15:32.044 slat (nsec): min=1439, max=5922.9k, avg=88084.95, stdev=515526.67 00:15:32.044 clat (usec): min=3002, max=24515, avg=11997.71, stdev=2706.59 00:15:32.044 lat (usec): min=3990, max=27549, avg=12085.79, stdev=2719.23 00:15:32.044 clat percentiles (usec): 00:15:32.044 | 1.00th=[ 6849], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[ 9896], 00:15:32.044 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11863], 60.00th=[12256], 00:15:32.044 | 70.00th=[12780], 80.00th=[13829], 90.00th=[15270], 95.00th=[16712], 00:15:32.044 | 99.00th=[20841], 99.50th=[22414], 99.90th=[23987], 99.95th=[23987], 00:15:32.044 | 99.99th=[24511] 00:15:32.044 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:15:32.044 slat (usec): min=2, max=8425, avg=134.33, stdev=475.34 00:15:32.044 clat (usec): min=6979, max=27256, avg=16764.55, stdev=3521.90 00:15:32.044 lat (usec): min=6987, max=27266, avg=16898.89, stdev=3546.11 00:15:32.044 clat percentiles (usec): 00:15:32.044 | 1.00th=[ 9241], 5.00th=[11338], 10.00th=[12256], 20.00th=[13566], 00:15:32.044 | 30.00th=[14615], 40.00th=[15926], 50.00th=[16581], 60.00th=[17433], 00:15:32.044 | 70.00th=[18744], 80.00th=[19530], 90.00th=[21365], 95.00th=[22938], 00:15:32.044 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26608], 99.95th=[26608], 00:15:32.044 | 99.99th=[27132] 00:15:32.044 bw ( KiB/s): min=17264, max=18936, per=29.27%, avg=18100.00, stdev=1182.28, samples=2 00:15:32.044 iops : min= 4316, max= 4734, avg=4525.00, stdev=295.57, samples=2 00:15:32.044 lat (msec) : 4=0.05%, 10=10.81%, 20=79.02%, 50=10.12% 00:15:32.044 cpu : usr=2.49%, sys=3.19%, ctx=804, majf=0, minf=1 00:15:32.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:32.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.044 issued rwts: total=4140,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.044 job2: (groupid=0, jobs=1): err= 0: pid=1677574: Sat Apr 27 00:49:24 2024 00:15:32.044 read: IOPS=4535, BW=17.7MiB/s (18.6MB/s)(18.0MiB/1016msec) 00:15:32.044 slat (nsec): min=1559, max=14748k, avg=111178.11, stdev=749256.01 00:15:32.044 clat (usec): min=6669, max=29978, avg=14388.40, stdev=3195.34 00:15:32.044 lat (usec): min=6676, max=29987, avg=14499.58, stdev=3254.34 00:15:32.044 clat percentiles (usec): 00:15:32.044 | 1.00th=[ 9241], 5.00th=[10683], 10.00th=[11076], 20.00th=[12256], 00:15:32.044 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13698], 60.00th=[14091], 00:15:32.044 | 70.00th=[14877], 80.00th=[16450], 90.00th=[19006], 95.00th=[20841], 00:15:32.044 | 99.00th=[23462], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:15:32.044 | 99.99th=[30016] 00:15:32.044 write: IOPS=4636, BW=18.1MiB/s (19.0MB/s)(18.4MiB/1016msec); 0 zone resets 00:15:32.044 slat (usec): min=2, max=10181, avg=98.56, stdev=463.35 00:15:32.044 clat (usec): min=2812, max=24189, avg=13271.87, stdev=3802.65 00:15:32.044 lat (usec): min=4425, max=24193, avg=13370.44, stdev=3820.66 00:15:32.044 clat percentiles (usec): 00:15:32.044 | 1.00th=[ 4948], 5.00th=[ 6587], 10.00th=[ 8586], 20.00th=[ 9634], 00:15:32.044 | 30.00th=[10814], 
40.00th=[11731], 50.00th=[13435], 60.00th=[15008], 00:15:32.044 | 70.00th=[16057], 80.00th=[16909], 90.00th=[17957], 95.00th=[18482], 00:15:32.044 | 99.00th=[20579], 99.50th=[20841], 99.90th=[24249], 99.95th=[24249], 00:15:32.044 | 99.99th=[24249] 00:15:32.044 bw ( KiB/s): min=18456, max=18576, per=29.94%, avg=18516.00, stdev=84.85, samples=2 00:15:32.044 iops : min= 4614, max= 4644, avg=4629.00, stdev=21.21, samples=2 00:15:32.044 lat (msec) : 4=0.02%, 10=13.39%, 20=82.78%, 50=3.81% 00:15:32.044 cpu : usr=2.76%, sys=4.93%, ctx=639, majf=0, minf=1 00:15:32.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:32.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.044 issued rwts: total=4608,4711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.044 job3: (groupid=0, jobs=1): err= 0: pid=1677575: Sat Apr 27 00:49:24 2024 00:15:32.044 read: IOPS=2519, BW=9.84MiB/s (10.3MB/s)(10.0MiB/1016msec) 00:15:32.044 slat (nsec): min=1337, max=69624k, avg=222920.11, stdev=2703828.46 00:15:32.044 clat (usec): min=6459, max=91249, avg=28024.87, stdev=24752.18 00:15:32.044 lat (usec): min=6466, max=91252, avg=28247.79, stdev=24847.62 00:15:32.044 clat percentiles (usec): 00:15:32.044 | 1.00th=[ 9110], 5.00th=[10814], 10.00th=[11469], 20.00th=[12518], 00:15:32.044 | 30.00th=[13304], 40.00th=[13698], 50.00th=[15926], 60.00th=[17695], 00:15:32.044 | 70.00th=[21627], 80.00th=[52167], 90.00th=[77071], 95.00th=[79168], 00:15:32.044 | 99.00th=[85459], 99.50th=[89654], 99.90th=[91751], 99.95th=[91751], 00:15:32.044 | 99.99th=[91751] 00:15:32.044 write: IOPS=2691, BW=10.5MiB/s (11.0MB/s)(10.7MiB/1016msec); 0 zone resets 00:15:32.044 slat (usec): min=2, max=76451, avg=157.46, stdev=1960.62 00:15:32.044 clat (usec): min=3879, max=94046, avg=20879.72, stdev=18386.71 00:15:32.044 lat (usec): min=3956, max=94056, avg=21037.18, stdev=18449.84 00:15:32.044 clat percentiles (usec): 00:15:32.044 | 1.00th=[ 5342], 5.00th=[ 7439], 10.00th=[ 8225], 20.00th=[10159], 00:15:32.044 | 30.00th=[11207], 40.00th=[12256], 50.00th=[13304], 60.00th=[15533], 00:15:32.044 | 70.00th=[17957], 80.00th=[28181], 90.00th=[46924], 95.00th=[53216], 00:15:32.044 | 99.00th=[87557], 99.50th=[87557], 99.90th=[87557], 99.95th=[87557], 00:15:32.044 | 99.99th=[93848] 00:15:32.044 bw ( KiB/s): min= 9984, max=10872, per=16.86%, avg=10428.00, stdev=627.91, samples=2 00:15:32.044 iops : min= 2496, max= 2718, avg=2607.00, stdev=156.98, samples=2 00:15:32.044 lat (msec) : 4=0.25%, 10=9.75%, 20=60.66%, 50=14.71%, 100=14.64% 00:15:32.044 cpu : usr=1.48%, sys=2.07%, ctx=231, majf=0, minf=1 00:15:32.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:32.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:32.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:32.044 issued rwts: total=2560,2735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:32.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:32.044 00:15:32.044 Run status group 0 (all jobs): 00:15:32.044 READ: bw=57.3MiB/s (60.0MB/s), 9.84MiB/s-17.7MiB/s (10.3MB/s-18.6MB/s), io=58.2MiB (61.0MB), run=1002-1016msec 00:15:32.044 WRITE: bw=60.4MiB/s (63.3MB/s), 10.5MiB/s-18.1MiB/s (11.0MB/s-19.0MB/s), io=61.4MiB (64.3MB), run=1002-1016msec 00:15:32.044 00:15:32.044 Disk stats (read/write): 00:15:32.044 
nvme0n1: ios=2950/3072, merge=0/0, ticks=12965/15160, in_queue=28125, util=96.39% 00:15:32.044 nvme0n2: ios=3605/3877, merge=0/0, ticks=22327/30031, in_queue=52358, util=97.26% 00:15:32.044 nvme0n3: ios=3728/4096, merge=0/0, ticks=54010/52092, in_queue=106102, util=98.65% 00:15:32.044 nvme0n4: ios=2071/2465, merge=0/0, ticks=56092/50865, in_queue=106957, util=95.60% 00:15:32.044 00:49:24 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:32.044 [global] 00:15:32.044 thread=1 00:15:32.044 invalidate=1 00:15:32.044 rw=randwrite 00:15:32.044 time_based=1 00:15:32.044 runtime=1 00:15:32.044 ioengine=libaio 00:15:32.044 direct=1 00:15:32.044 bs=4096 00:15:32.044 iodepth=128 00:15:32.044 norandommap=0 00:15:32.044 numjobs=1 00:15:32.044 00:15:32.044 verify_dump=1 00:15:32.044 verify_backlog=512 00:15:32.044 verify_state_save=0 00:15:32.044 do_verify=1 00:15:32.044 verify=crc32c-intel 00:15:32.044 [job0] 00:15:32.044 filename=/dev/nvme0n1 00:15:32.044 [job1] 00:15:32.044 filename=/dev/nvme0n2 00:15:32.044 [job2] 00:15:32.044 filename=/dev/nvme0n3 00:15:32.044 [job3] 00:15:32.044 filename=/dev/nvme0n4 00:15:32.044 Could not set queue depth (nvme0n1) 00:15:32.044 Could not set queue depth (nvme0n2) 00:15:32.044 Could not set queue depth (nvme0n3) 00:15:32.045 Could not set queue depth (nvme0n4) 00:15:32.045 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:32.045 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:32.045 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:32.045 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:32.045 fio-3.35 00:15:32.045 Starting 4 threads 00:15:33.421 00:15:33.421 job0: (groupid=0, jobs=1): err= 0: pid=1677940: Sat Apr 27 00:49:25 2024 00:15:33.421 read: IOPS=4068, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:15:33.421 slat (nsec): min=1073, max=15827k, avg=118047.61, stdev=737476.09 00:15:33.421 clat (usec): min=1189, max=31746, avg=15378.93, stdev=4239.47 00:15:33.421 lat (usec): min=5486, max=34826, avg=15496.98, stdev=4275.75 00:15:33.421 clat percentiles (usec): 00:15:33.421 | 1.00th=[ 8848], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11994], 00:15:33.421 | 30.00th=[12649], 40.00th=[13435], 50.00th=[14484], 60.00th=[15401], 00:15:33.421 | 70.00th=[16712], 80.00th=[18744], 90.00th=[21103], 95.00th=[23462], 00:15:33.421 | 99.00th=[27395], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:15:33.421 | 99.99th=[31851] 00:15:33.421 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:15:33.421 slat (nsec): min=1866, max=11625k, avg=121562.07, stdev=593470.16 00:15:33.421 clat (usec): min=1187, max=31877, avg=15748.98, stdev=4878.94 00:15:33.421 lat (usec): min=1197, max=31885, avg=15870.54, stdev=4911.66 00:15:33.421 clat percentiles (usec): 00:15:33.421 | 1.00th=[ 6325], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[11994], 00:15:33.421 | 30.00th=[12387], 40.00th=[12911], 50.00th=[14091], 60.00th=[16319], 00:15:33.421 | 70.00th=[18220], 80.00th=[19792], 90.00th=[23462], 95.00th=[25035], 00:15:33.421 | 99.00th=[28181], 99.50th=[28967], 99.90th=[30802], 99.95th=[30802], 00:15:33.421 | 99.99th=[31851] 00:15:33.421 bw ( KiB/s): min=12288, max=20480, per=24.32%, avg=16384.00, stdev=5792.62, 
samples=2 00:15:33.421 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:15:33.421 lat (msec) : 2=0.04%, 10=6.01%, 20=76.52%, 50=17.43% 00:15:33.421 cpu : usr=1.50%, sys=3.59%, ctx=642, majf=0, minf=1 00:15:33.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:33.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.421 issued rwts: total=4085,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.421 job1: (groupid=0, jobs=1): err= 0: pid=1677941: Sat Apr 27 00:49:25 2024 00:15:33.421 read: IOPS=4442, BW=17.4MiB/s (18.2MB/s)(17.4MiB/1003msec) 00:15:33.421 slat (nsec): min=999, max=17376k, avg=110959.59, stdev=704650.18 00:15:33.421 clat (usec): min=1619, max=36381, avg=13985.42, stdev=4154.26 00:15:33.421 lat (usec): min=1918, max=36389, avg=14096.38, stdev=4185.67 00:15:33.421 clat percentiles (usec): 00:15:33.421 | 1.00th=[ 4621], 5.00th=[ 9110], 10.00th=[10552], 20.00th=[11731], 00:15:33.421 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13304], 60.00th=[13698], 00:15:33.421 | 70.00th=[14091], 80.00th=[15664], 90.00th=[18482], 95.00th=[22152], 00:15:33.421 | 99.00th=[30540], 99.50th=[32900], 99.90th=[36439], 99.95th=[36439], 00:15:33.421 | 99.99th=[36439] 00:15:33.421 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:15:33.421 slat (nsec): min=1757, max=8967.0k, avg=106040.13, stdev=577739.56 00:15:33.421 clat (usec): min=2631, max=36382, avg=14071.89, stdev=4081.11 00:15:33.421 lat (usec): min=2641, max=36400, avg=14177.93, stdev=4091.40 00:15:33.421 clat percentiles (usec): 00:15:33.421 | 1.00th=[ 6849], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10945], 00:15:33.421 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13304], 60.00th=[14222], 00:15:33.421 | 70.00th=[15401], 80.00th=[16909], 90.00th=[19792], 95.00th=[22938], 00:15:33.421 | 99.00th=[26346], 99.50th=[26346], 99.90th=[27657], 99.95th=[29230], 00:15:33.421 | 99.99th=[36439] 00:15:33.421 bw ( KiB/s): min=16384, max=20480, per=27.36%, avg=18432.00, stdev=2896.31, samples=2 00:15:33.421 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:15:33.421 lat (msec) : 2=0.20%, 4=0.24%, 10=9.43%, 20=81.69%, 50=8.44% 00:15:33.421 cpu : usr=2.20%, sys=2.99%, ctx=612, majf=0, minf=1 00:15:33.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:33.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.421 issued rwts: total=4456,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.421 job2: (groupid=0, jobs=1): err= 0: pid=1677942: Sat Apr 27 00:49:25 2024 00:15:33.421 read: IOPS=3705, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1002msec) 00:15:33.421 slat (nsec): min=1040, max=11187k, avg=123372.01, stdev=745002.17 00:15:33.421 clat (usec): min=959, max=37141, avg=16231.17, stdev=5286.14 00:15:33.421 lat (usec): min=4295, max=37150, avg=16354.54, stdev=5317.77 00:15:33.421 clat percentiles (usec): 00:15:33.421 | 1.00th=[ 6587], 5.00th=[10290], 10.00th=[12125], 20.00th=[13698], 00:15:33.421 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:15:33.421 | 70.00th=[15926], 80.00th=[17433], 90.00th=[21365], 95.00th=[31065], 00:15:33.421 | 99.00th=[34866], 
99.50th=[35390], 99.90th=[36963], 99.95th=[36963], 00:15:33.421 | 99.99th=[36963] 00:15:33.421 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:15:33.421 slat (nsec): min=1991, max=10051k, avg=128755.66, stdev=767838.94 00:15:33.421 clat (usec): min=1758, max=42087, avg=16304.91, stdev=4741.48 00:15:33.421 lat (usec): min=1777, max=42090, avg=16433.66, stdev=4774.83 00:15:33.421 clat percentiles (usec): 00:15:33.421 | 1.00th=[ 7308], 5.00th=[10159], 10.00th=[11469], 20.00th=[12518], 00:15:33.421 | 30.00th=[13698], 40.00th=[14353], 50.00th=[15139], 60.00th=[16581], 00:15:33.421 | 70.00th=[17695], 80.00th=[20055], 90.00th=[22938], 95.00th=[24249], 00:15:33.421 | 99.00th=[28443], 99.50th=[36439], 99.90th=[40633], 99.95th=[42206], 00:15:33.421 | 99.99th=[42206] 00:15:33.421 bw ( KiB/s): min=14864, max=17904, per=24.32%, avg=16384.00, stdev=2149.60, samples=2 00:15:33.421 iops : min= 3716, max= 4476, avg=4096.00, stdev=537.40, samples=2 00:15:33.421 lat (usec) : 1000=0.01% 00:15:33.421 lat (msec) : 2=0.04%, 10=4.02%, 20=79.17%, 50=16.76% 00:15:33.421 cpu : usr=1.80%, sys=3.20%, ctx=473, majf=0, minf=1 00:15:33.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:33.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.421 issued rwts: total=3713,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.421 job3: (groupid=0, jobs=1): err= 0: pid=1677943: Sat Apr 27 00:49:25 2024 00:15:33.421 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:15:33.421 slat (nsec): min=1138, max=15070k, avg=112219.08, stdev=709360.90 00:15:33.421 clat (usec): min=4355, max=29227, avg=15801.13, stdev=3265.81 00:15:33.421 lat (usec): min=4360, max=34478, avg=15913.35, stdev=3285.45 00:15:33.421 clat percentiles (usec): 00:15:33.421 | 1.00th=[10028], 5.00th=[10814], 10.00th=[12125], 20.00th=[13042], 00:15:33.421 | 30.00th=[14091], 40.00th=[14615], 50.00th=[15139], 60.00th=[16319], 00:15:33.421 | 70.00th=[17171], 80.00th=[18220], 90.00th=[20841], 95.00th=[21890], 00:15:33.421 | 99.00th=[24511], 99.50th=[25560], 99.90th=[27395], 99.95th=[27657], 00:15:33.421 | 99.99th=[29230] 00:15:33.421 write: IOPS=4181, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1011msec); 0 zone resets 00:15:33.421 slat (nsec): min=1938, max=11144k, avg=112958.99, stdev=611793.64 00:15:33.421 clat (usec): min=3462, max=52579, avg=14845.35, stdev=3851.56 00:15:33.421 lat (usec): min=3470, max=52582, avg=14958.31, stdev=3892.93 00:15:33.421 clat percentiles (usec): 00:15:33.421 | 1.00th=[ 6718], 5.00th=[ 8717], 10.00th=[10552], 20.00th=[11863], 00:15:33.421 | 30.00th=[13173], 40.00th=[13960], 50.00th=[14222], 60.00th=[15008], 00:15:33.421 | 70.00th=[16909], 80.00th=[17957], 90.00th=[19530], 95.00th=[20579], 00:15:33.421 | 99.00th=[26608], 99.50th=[27395], 99.90th=[40109], 99.95th=[40109], 00:15:33.421 | 99.99th=[52691] 00:15:33.421 bw ( KiB/s): min=15576, max=17248, per=24.36%, avg=16412.00, stdev=1182.28, samples=2 00:15:33.421 iops : min= 3894, max= 4312, avg=4103.00, stdev=295.57, samples=2 00:15:33.421 lat (msec) : 4=0.14%, 10=4.88%, 20=84.83%, 50=10.14%, 100=0.01% 00:15:33.421 cpu : usr=2.18%, sys=4.26%, ctx=525, majf=0, minf=1 00:15:33.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:33.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.421 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.421 issued rwts: total=4096,4228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.421 00:15:33.421 Run status group 0 (all jobs): 00:15:33.421 READ: bw=63.2MiB/s (66.2MB/s), 14.5MiB/s-17.4MiB/s (15.2MB/s-18.2MB/s), io=63.9MiB (67.0MB), run=1002-1011msec 00:15:33.421 WRITE: bw=65.8MiB/s (69.0MB/s), 15.9MiB/s-17.9MiB/s (16.7MB/s-18.8MB/s), io=66.5MiB (69.7MB), run=1002-1011msec 00:15:33.421 00:15:33.421 Disk stats (read/write): 00:15:33.421 nvme0n1: ios=3249/3584, merge=0/0, ticks=24251/23671, in_queue=47922, util=84.55% 00:15:33.422 nvme0n2: ios=3634/3826, merge=0/0, ticks=23933/26451, in_queue=50384, util=87.46% 00:15:33.422 nvme0n3: ios=3156/3584, merge=0/0, ticks=21768/25785, in_queue=47553, util=93.18% 00:15:33.422 nvme0n4: ios=3188/3584, merge=0/0, ticks=24255/22668, in_queue=46923, util=97.69% 00:15:33.422 00:49:25 -- target/fio.sh@55 -- # sync 00:15:33.422 00:49:25 -- target/fio.sh@59 -- # fio_pid=1678175 00:15:33.422 00:49:25 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:33.422 00:49:25 -- target/fio.sh@61 -- # sleep 3 00:15:33.422 [global] 00:15:33.422 thread=1 00:15:33.422 invalidate=1 00:15:33.422 rw=read 00:15:33.422 time_based=1 00:15:33.422 runtime=10 00:15:33.422 ioengine=libaio 00:15:33.422 direct=1 00:15:33.422 bs=4096 00:15:33.422 iodepth=1 00:15:33.422 norandommap=1 00:15:33.422 numjobs=1 00:15:33.422 00:15:33.422 [job0] 00:15:33.422 filename=/dev/nvme0n1 00:15:33.422 [job1] 00:15:33.422 filename=/dev/nvme0n2 00:15:33.422 [job2] 00:15:33.422 filename=/dev/nvme0n3 00:15:33.422 [job3] 00:15:33.422 filename=/dev/nvme0n4 00:15:33.422 Could not set queue depth (nvme0n1) 00:15:33.422 Could not set queue depth (nvme0n2) 00:15:33.422 Could not set queue depth (nvme0n3) 00:15:33.422 Could not set queue depth (nvme0n4) 00:15:33.680 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:33.680 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:33.681 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:33.681 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:33.681 fio-3.35 00:15:33.681 Starting 4 threads 00:15:36.966 00:49:28 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:36.966 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=11354112, buflen=4096 00:15:36.966 fio: pid=1678323, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:36.966 00:49:29 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:36.966 00:49:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:36.966 00:49:29 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:36.966 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=299008, buflen=4096 00:15:36.966 fio: pid=1678322, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:36.966 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=17752064, buflen=4096 
00:15:36.966 fio: pid=1678320, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:36.966 00:49:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:36.966 00:49:29 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:37.225 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=323584, buflen=4096 00:15:37.225 fio: pid=1678321, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:15:37.225 00:49:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:37.225 00:49:29 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:37.225 00:15:37.225 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1678320: Sat Apr 27 00:49:29 2024 00:15:37.225 read: IOPS=1409, BW=5636KiB/s (5771kB/s)(16.9MiB/3076msec) 00:15:37.225 slat (nsec): min=7085, max=59885, avg=8665.00, stdev=2098.85 00:15:37.225 clat (usec): min=317, max=42065, avg=693.97, stdev=2982.56 00:15:37.225 lat (usec): min=325, max=42087, avg=702.63, stdev=2983.25 00:15:37.225 clat percentiles (usec): 00:15:37.225 | 1.00th=[ 359], 5.00th=[ 371], 10.00th=[ 383], 20.00th=[ 412], 00:15:37.225 | 30.00th=[ 437], 40.00th=[ 453], 50.00th=[ 465], 60.00th=[ 482], 00:15:37.225 | 70.00th=[ 502], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 635], 00:15:37.225 | 99.00th=[ 816], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:15:37.225 | 99.99th=[42206] 00:15:37.225 bw ( KiB/s): min= 4584, max= 7896, per=75.77%, avg=6748.80, stdev=1368.23, samples=5 00:15:37.225 iops : min= 1146, max= 1974, avg=1687.20, stdev=342.06, samples=5 00:15:37.225 lat (usec) : 500=69.83%, 750=28.90%, 1000=0.58% 00:15:37.225 lat (msec) : 2=0.12%, 10=0.02%, 50=0.53% 00:15:37.225 cpu : usr=0.98%, sys=2.24%, ctx=4343, majf=0, minf=1 00:15:37.225 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:37.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.225 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.225 issued rwts: total=4335,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.225 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:37.225 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1678321: Sat Apr 27 00:49:29 2024 00:15:37.225 read: IOPS=24, BW=96.9KiB/s (99.3kB/s)(316KiB/3260msec) 00:15:37.225 slat (usec): min=9, max=13574, avg=272.49, stdev=1668.97 00:15:37.225 clat (usec): min=900, max=45563, avg=40977.69, stdev=6510.25 00:15:37.225 lat (usec): min=921, max=55004, avg=41171.91, stdev=6697.77 00:15:37.225 clat percentiles (usec): 00:15:37.225 | 1.00th=[ 898], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:15:37.225 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:15:37.225 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:15:37.225 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:15:37.225 | 99.99th=[45351] 00:15:37.225 bw ( KiB/s): min= 92, max= 104, per=1.08%, avg=96.67, stdev= 3.93, samples=6 00:15:37.225 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6 00:15:37.225 lat (usec) : 1000=1.25% 00:15:37.225 lat (msec) : 2=1.25%, 50=96.25% 00:15:37.225 cpu : usr=0.09%, sys=0.18%, ctx=83, majf=0, minf=1 
00:15:37.225 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:37.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.225 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.225 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.225 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:37.225 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1678322: Sat Apr 27 00:49:29 2024 00:15:37.225 read: IOPS=25, BW=100KiB/s (102kB/s)(292KiB/2918msec) 00:15:37.225 slat (nsec): min=9233, max=35723, avg=21942.45, stdev=2847.80 00:15:37.225 clat (usec): min=777, max=43127, avg=39653.31, stdev=9404.57 00:15:37.225 lat (usec): min=790, max=43149, avg=39675.25, stdev=9404.29 00:15:37.225 clat percentiles (usec): 00:15:37.225 | 1.00th=[ 775], 5.00th=[ 1090], 10.00th=[41157], 20.00th=[41681], 00:15:37.225 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:15:37.225 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:37.225 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:15:37.225 | 99.99th=[43254] 00:15:37.225 bw ( KiB/s): min= 96, max= 120, per=1.12%, avg=100.80, stdev=10.73, samples=5 00:15:37.225 iops : min= 24, max= 30, avg=25.20, stdev= 2.68, samples=5 00:15:37.225 lat (usec) : 1000=4.05% 00:15:37.225 lat (msec) : 2=1.35%, 50=93.24% 00:15:37.225 cpu : usr=0.00%, sys=0.07%, ctx=74, majf=0, minf=1 00:15:37.225 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:37.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.225 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.225 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.225 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:37.225 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1678323: Sat Apr 27 00:49:29 2024 00:15:37.225 read: IOPS=1024, BW=4096KiB/s (4194kB/s)(10.8MiB/2707msec) 00:15:37.225 slat (nsec): min=5438, max=94288, avg=8472.47, stdev=2727.61 00:15:37.225 clat (usec): min=317, max=43100, avg=958.09, stdev=4496.48 00:15:37.225 lat (usec): min=326, max=43115, avg=966.56, stdev=4496.82 00:15:37.225 clat percentiles (usec): 00:15:37.225 | 1.00th=[ 359], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 392], 00:15:37.225 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 429], 60.00th=[ 441], 00:15:37.225 | 70.00th=[ 465], 80.00th=[ 510], 90.00th=[ 627], 95.00th=[ 766], 00:15:37.225 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:37.225 | 99.99th=[43254] 00:15:37.225 bw ( KiB/s): min= 96, max= 7840, per=49.71%, avg=4427.20, stdev=3531.71, samples=5 00:15:37.225 iops : min= 24, max= 1960, avg=1106.80, stdev=882.93, samples=5 00:15:37.225 lat (usec) : 500=78.11%, 750=16.19%, 1000=4.22% 00:15:37.225 lat (msec) : 2=0.22%, 4=0.04%, 50=1.19% 00:15:37.225 cpu : usr=0.22%, sys=1.11%, ctx=2774, majf=0, minf=2 00:15:37.225 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:37.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.225 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.225 issued rwts: total=2773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.225 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:37.225 00:15:37.225 Run 
status group 0 (all jobs): 00:15:37.225 READ: bw=8906KiB/s (9119kB/s), 96.9KiB/s-5636KiB/s (99.3kB/s-5771kB/s), io=28.4MiB (29.7MB), run=2707-3260msec 00:15:37.225 00:15:37.225 Disk stats (read/write): 00:15:37.225 nvme0n1: ios=4369/0, merge=0/0, ticks=3539/0, in_queue=3539, util=98.80% 00:15:37.225 nvme0n2: ios=75/0, merge=0/0, ticks=3068/0, in_queue=3068, util=95.70% 00:15:37.225 nvme0n3: ios=72/0, merge=0/0, ticks=2855/0, in_queue=2855, util=96.52% 00:15:37.225 nvme0n4: ios=2770/0, merge=0/0, ticks=2557/0, in_queue=2557, util=96.41% 00:15:37.483 00:49:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:37.483 00:49:29 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:37.483 00:49:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:37.483 00:49:30 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:37.741 00:49:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:37.741 00:49:30 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:38.000 00:49:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:38.000 00:49:30 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:38.000 00:49:30 -- target/fio.sh@69 -- # fio_status=0 00:15:38.000 00:49:30 -- target/fio.sh@70 -- # wait 1678175 00:15:38.000 00:49:30 -- target/fio.sh@70 -- # fio_status=4 00:15:38.000 00:49:30 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:38.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.258 00:49:30 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:38.258 00:49:30 -- common/autotest_common.sh@1205 -- # local i=0 00:15:38.258 00:49:30 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:38.258 00:49:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:38.258 00:49:30 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:38.258 00:49:30 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:38.258 00:49:30 -- common/autotest_common.sh@1217 -- # return 0 00:15:38.258 00:49:30 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:38.258 00:49:30 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:38.258 nvmf hotplug test: fio failed as expected 00:15:38.258 00:49:30 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.516 00:49:30 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:38.516 00:49:30 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:38.516 00:49:30 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:38.516 00:49:30 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:38.516 00:49:30 -- target/fio.sh@91 -- # nvmftestfini 00:15:38.516 00:49:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:38.516 00:49:30 -- nvmf/common.sh@117 -- # sync 00:15:38.516 00:49:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.516 00:49:31 -- nvmf/common.sh@120 -- # set +e 00:15:38.516 00:49:31 -- nvmf/common.sh@121 -- # for 
i in {1..20} 00:15:38.516 00:49:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.516 rmmod nvme_tcp 00:15:38.516 rmmod nvme_fabrics 00:15:38.516 rmmod nvme_keyring 00:15:38.516 00:49:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.516 00:49:31 -- nvmf/common.sh@124 -- # set -e 00:15:38.516 00:49:31 -- nvmf/common.sh@125 -- # return 0 00:15:38.516 00:49:31 -- nvmf/common.sh@478 -- # '[' -n 1675255 ']' 00:15:38.516 00:49:31 -- nvmf/common.sh@479 -- # killprocess 1675255 00:15:38.516 00:49:31 -- common/autotest_common.sh@936 -- # '[' -z 1675255 ']' 00:15:38.516 00:49:31 -- common/autotest_common.sh@940 -- # kill -0 1675255 00:15:38.516 00:49:31 -- common/autotest_common.sh@941 -- # uname 00:15:38.516 00:49:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.517 00:49:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1675255 00:15:38.517 00:49:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:38.517 00:49:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:38.517 00:49:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1675255' 00:15:38.517 killing process with pid 1675255 00:15:38.517 00:49:31 -- common/autotest_common.sh@955 -- # kill 1675255 00:15:38.517 00:49:31 -- common/autotest_common.sh@960 -- # wait 1675255 00:15:38.776 00:49:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:38.776 00:49:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:38.776 00:49:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:38.776 00:49:31 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:38.776 00:49:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:38.776 00:49:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.776 00:49:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.776 00:49:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.679 00:49:33 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:40.937 00:15:40.937 real 0m25.923s 00:15:40.937 user 1m45.256s 00:15:40.937 sys 0m7.173s 00:15:40.937 00:49:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:40.937 00:49:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.937 ************************************ 00:15:40.937 END TEST nvmf_fio_target 00:15:40.937 ************************************ 00:15:40.937 00:49:33 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:40.937 00:49:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:40.937 00:49:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.937 00:49:33 -- common/autotest_common.sh@10 -- # set +x 00:15:40.937 ************************************ 00:15:40.937 START TEST nvmf_bdevio 00:15:40.937 ************************************ 00:15:40.937 00:49:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:40.937 * Looking for test storage... 
00:15:40.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.937 00:49:33 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.937 00:49:33 -- nvmf/common.sh@7 -- # uname -s 00:15:40.937 00:49:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.937 00:49:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.937 00:49:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.937 00:49:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.937 00:49:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.937 00:49:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.937 00:49:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.937 00:49:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.937 00:49:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.937 00:49:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.937 00:49:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.937 00:49:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.937 00:49:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.937 00:49:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.937 00:49:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.937 00:49:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.937 00:49:33 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.937 00:49:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.937 00:49:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.937 00:49:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.937 00:49:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.938 00:49:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.938 00:49:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.938 00:49:33 -- paths/export.sh@5 -- # export PATH 00:15:40.938 00:49:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.938 00:49:33 -- nvmf/common.sh@47 -- # : 0 00:15:40.938 00:49:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.938 00:49:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.938 00:49:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.938 00:49:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.938 00:49:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.938 00:49:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.938 00:49:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.938 00:49:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.938 00:49:33 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:40.938 00:49:33 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:40.938 00:49:33 -- target/bdevio.sh@14 -- # nvmftestinit 00:15:40.938 00:49:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:40.938 00:49:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.938 00:49:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:40.938 00:49:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:40.938 00:49:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:40.938 00:49:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.938 00:49:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.938 00:49:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.938 00:49:33 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:40.938 00:49:33 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:40.938 00:49:33 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:40.938 00:49:33 -- common/autotest_common.sh@10 -- # set +x 00:15:46.208 00:49:38 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:46.208 00:49:38 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:46.208 00:49:38 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:46.208 00:49:38 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:46.208 00:49:38 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:46.208 00:49:38 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:46.208 00:49:38 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:46.208 00:49:38 -- nvmf/common.sh@295 -- # net_devs=() 00:15:46.208 00:49:38 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:46.208 00:49:38 -- nvmf/common.sh@296 
-- # e810=() 00:15:46.208 00:49:38 -- nvmf/common.sh@296 -- # local -ga e810 00:15:46.208 00:49:38 -- nvmf/common.sh@297 -- # x722=() 00:15:46.208 00:49:38 -- nvmf/common.sh@297 -- # local -ga x722 00:15:46.208 00:49:38 -- nvmf/common.sh@298 -- # mlx=() 00:15:46.208 00:49:38 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:46.208 00:49:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:46.208 00:49:38 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:46.208 00:49:38 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:46.208 00:49:38 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:46.208 00:49:38 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:46.208 00:49:38 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:46.208 00:49:38 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:46.208 00:49:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:46.208 00:49:38 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:46.208 00:49:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:46.208 00:49:38 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:46.208 00:49:38 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:46.208 00:49:38 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:46.209 00:49:38 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:46.209 00:49:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:46.209 00:49:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:46.209 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:46.209 00:49:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:46.209 00:49:38 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:46.209 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:46.209 00:49:38 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:46.209 00:49:38 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:46.209 00:49:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.209 00:49:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:46.209 00:49:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.209 00:49:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:46.209 Found 
net devices under 0000:86:00.0: cvl_0_0 00:15:46.209 00:49:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.209 00:49:38 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:46.209 00:49:38 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.209 00:49:38 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:46.209 00:49:38 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.209 00:49:38 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:46.209 Found net devices under 0000:86:00.1: cvl_0_1 00:15:46.209 00:49:38 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.209 00:49:38 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:46.209 00:49:38 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:46.209 00:49:38 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:46.209 00:49:38 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.209 00:49:38 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.209 00:49:38 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:46.209 00:49:38 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:46.209 00:49:38 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:46.209 00:49:38 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:46.209 00:49:38 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:46.209 00:49:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:46.209 00:49:38 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.209 00:49:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:46.209 00:49:38 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:46.209 00:49:38 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:46.209 00:49:38 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:46.209 00:49:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:46.209 00:49:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:46.209 00:49:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:46.209 00:49:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:46.209 00:49:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:46.209 00:49:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:46.209 00:49:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:46.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:15:46.209 00:15:46.209 --- 10.0.0.2 ping statistics --- 00:15:46.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.209 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:15:46.209 00:49:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:46.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:46.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:15:46.209 00:15:46.209 --- 10.0.0.1 ping statistics --- 00:15:46.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.209 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:15:46.209 00:49:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.209 00:49:38 -- nvmf/common.sh@411 -- # return 0 00:15:46.209 00:49:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:46.209 00:49:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.209 00:49:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:46.209 00:49:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.209 00:49:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:46.209 00:49:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:46.209 00:49:38 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:46.209 00:49:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:46.209 00:49:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:46.209 00:49:38 -- common/autotest_common.sh@10 -- # set +x 00:15:46.209 00:49:38 -- nvmf/common.sh@470 -- # nvmfpid=1682554 00:15:46.209 00:49:38 -- nvmf/common.sh@471 -- # waitforlisten 1682554 00:15:46.209 00:49:38 -- common/autotest_common.sh@817 -- # '[' -z 1682554 ']' 00:15:46.209 00:49:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.209 00:49:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:46.209 00:49:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.209 00:49:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:46.209 00:49:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:46.209 00:49:38 -- common/autotest_common.sh@10 -- # set +x 00:15:46.209 [2024-04-27 00:49:38.748411] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:15:46.209 [2024-04-27 00:49:38.748457] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.209 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.209 [2024-04-27 00:49:38.804117] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.209 [2024-04-27 00:49:38.881778] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.209 [2024-04-27 00:49:38.881814] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.209 [2024-04-27 00:49:38.881821] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.209 [2024-04-27 00:49:38.881827] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.209 [2024-04-27 00:49:38.881833] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
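For reference, the "-m 0x78" mask passed to nvmf_tgt above selects CPU cores 3-6 (0x78 = 0b1111000), which is why four reactor threads are reported next. A minimal decoding sketch in shell, illustrative only and not part of the test scripts:

    mask=0x78
    for core in $(seq 0 31); do
        # print each core whose bit is set in the reactor mask
        (( (mask >> core) & 1 )) && echo "reactor core $core"
    done
    # expected output: reactor core 3, 4, 5 and 6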
00:15:46.209 [2024-04-27 00:49:38.881882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:46.209 [2024-04-27 00:49:38.881991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:46.209 [2024-04-27 00:49:38.882109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.209 [2024-04-27 00:49:38.882110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:47.144 00:49:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:47.144 00:49:39 -- common/autotest_common.sh@850 -- # return 0 00:15:47.144 00:49:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:47.144 00:49:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:47.144 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:15:47.144 00:49:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.144 00:49:39 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:47.144 00:49:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.144 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:15:47.144 [2024-04-27 00:49:39.604905] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.144 00:49:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.144 00:49:39 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:47.144 00:49:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.144 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:15:47.144 Malloc0 00:15:47.144 00:49:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.144 00:49:39 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:47.144 00:49:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.144 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:15:47.144 00:49:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.144 00:49:39 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:47.144 00:49:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.144 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:15:47.144 00:49:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.144 00:49:39 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.144 00:49:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.144 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:15:47.144 [2024-04-27 00:49:39.656602] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.144 00:49:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.144 00:49:39 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:47.144 00:49:39 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:47.144 00:49:39 -- nvmf/common.sh@521 -- # config=() 00:15:47.144 00:49:39 -- nvmf/common.sh@521 -- # local subsystem config 00:15:47.144 00:49:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:47.144 00:49:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:47.144 { 00:15:47.144 "params": { 00:15:47.144 "name": "Nvme$subsystem", 00:15:47.144 "trtype": "$TEST_TRANSPORT", 00:15:47.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:47.144 "adrfam": "ipv4", 00:15:47.144 "trsvcid": 
"$NVMF_PORT", 00:15:47.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:47.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:47.144 "hdgst": ${hdgst:-false}, 00:15:47.144 "ddgst": ${ddgst:-false} 00:15:47.144 }, 00:15:47.144 "method": "bdev_nvme_attach_controller" 00:15:47.144 } 00:15:47.144 EOF 00:15:47.144 )") 00:15:47.144 00:49:39 -- nvmf/common.sh@543 -- # cat 00:15:47.144 00:49:39 -- nvmf/common.sh@545 -- # jq . 00:15:47.144 00:49:39 -- nvmf/common.sh@546 -- # IFS=, 00:15:47.144 00:49:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:47.144 "params": { 00:15:47.144 "name": "Nvme1", 00:15:47.144 "trtype": "tcp", 00:15:47.144 "traddr": "10.0.0.2", 00:15:47.144 "adrfam": "ipv4", 00:15:47.144 "trsvcid": "4420", 00:15:47.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:47.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:47.144 "hdgst": false, 00:15:47.144 "ddgst": false 00:15:47.144 }, 00:15:47.144 "method": "bdev_nvme_attach_controller" 00:15:47.144 }' 00:15:47.144 [2024-04-27 00:49:39.705660] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:15:47.144 [2024-04-27 00:49:39.705705] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1682706 ] 00:15:47.144 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.144 [2024-04-27 00:49:39.761480] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:47.144 [2024-04-27 00:49:39.834016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.144 [2024-04-27 00:49:39.834109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.144 [2024-04-27 00:49:39.834110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.713 I/O targets: 00:15:47.714 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:47.714 00:15:47.714 00:15:47.714 CUnit - A unit testing framework for C - Version 2.1-3 00:15:47.714 http://cunit.sourceforge.net/ 00:15:47.714 00:15:47.714 00:15:47.714 Suite: bdevio tests on: Nvme1n1 00:15:47.714 Test: blockdev write read block ...passed 00:15:47.714 Test: blockdev write zeroes read block ...passed 00:15:47.714 Test: blockdev write zeroes read no split ...passed 00:15:47.714 Test: blockdev write zeroes read split ...passed 00:15:47.714 Test: blockdev write zeroes read split partial ...passed 00:15:47.714 Test: blockdev reset ...[2024-04-27 00:49:40.307749] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:47.714 [2024-04-27 00:49:40.307822] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0b780 (9): Bad file descriptor 00:15:47.714 [2024-04-27 00:49:40.324963] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:47.714 passed 00:15:47.714 Test: blockdev write read 8 blocks ...passed 00:15:47.714 Test: blockdev write read size > 128k ...passed 00:15:47.714 Test: blockdev write read invalid size ...passed 00:15:47.973 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:47.973 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:47.973 Test: blockdev write read max offset ...passed 00:15:47.973 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:47.973 Test: blockdev writev readv 8 blocks ...passed 00:15:47.973 Test: blockdev writev readv 30 x 1block ...passed 00:15:47.973 Test: blockdev writev readv block ...passed 00:15:47.973 Test: blockdev writev readv size > 128k ...passed 00:15:47.973 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:47.973 Test: blockdev comparev and writev ...[2024-04-27 00:49:40.642361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.973 [2024-04-27 00:49:40.642388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:47.973 [2024-04-27 00:49:40.642402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.973 [2024-04-27 00:49:40.642409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:47.973 [2024-04-27 00:49:40.642877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.973 [2024-04-27 00:49:40.642888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:47.973 [2024-04-27 00:49:40.642900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.973 [2024-04-27 00:49:40.642908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:47.973 [2024-04-27 00:49:40.643387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.973 [2024-04-27 00:49:40.643399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:47.973 [2024-04-27 00:49:40.643410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.973 [2024-04-27 00:49:40.643418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:47.973 [2024-04-27 00:49:40.643890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.973 [2024-04-27 00:49:40.643902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:47.973 [2024-04-27 00:49:40.643913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.973 [2024-04-27 00:49:40.643921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:48.271 passed 00:15:48.271 Test: blockdev nvme passthru rw ...passed 00:15:48.271 Test: blockdev nvme passthru vendor specific ...[2024-04-27 00:49:40.729819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:48.271 [2024-04-27 00:49:40.729834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:48.271 [2024-04-27 00:49:40.730161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:48.271 [2024-04-27 00:49:40.730176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:48.271 [2024-04-27 00:49:40.730503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:48.271 [2024-04-27 00:49:40.730513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:48.271 [2024-04-27 00:49:40.730839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:48.271 [2024-04-27 00:49:40.730850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:48.271 passed 00:15:48.271 Test: blockdev nvme admin passthru ...passed 00:15:48.271 Test: blockdev copy ...passed 00:15:48.271 00:15:48.271 Run Summary: Type Total Ran Passed Failed Inactive 00:15:48.271 suites 1 1 n/a 0 0 00:15:48.272 tests 23 23 23 0 0 00:15:48.272 asserts 152 152 152 0 n/a 00:15:48.272 00:15:48.272 Elapsed time = 1.348 seconds 00:15:48.560 00:49:40 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:48.560 00:49:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.560 00:49:40 -- common/autotest_common.sh@10 -- # set +x 00:15:48.560 00:49:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.560 00:49:40 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:48.560 00:49:40 -- target/bdevio.sh@30 -- # nvmftestfini 00:15:48.560 00:49:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:48.560 00:49:40 -- nvmf/common.sh@117 -- # sync 00:15:48.560 00:49:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:48.560 00:49:40 -- nvmf/common.sh@120 -- # set +e 00:15:48.560 00:49:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:48.560 00:49:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:48.560 rmmod nvme_tcp 00:15:48.560 rmmod nvme_fabrics 00:15:48.560 rmmod nvme_keyring 00:15:48.560 00:49:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:48.560 00:49:41 -- nvmf/common.sh@124 -- # set -e 00:15:48.560 00:49:41 -- nvmf/common.sh@125 -- # return 0 00:15:48.560 00:49:41 -- nvmf/common.sh@478 -- # '[' -n 1682554 ']' 00:15:48.560 00:49:41 -- nvmf/common.sh@479 -- # killprocess 1682554 00:15:48.560 00:49:41 -- common/autotest_common.sh@936 -- # '[' -z 1682554 ']' 00:15:48.560 00:49:41 -- common/autotest_common.sh@940 -- # kill -0 1682554 00:15:48.560 00:49:41 -- common/autotest_common.sh@941 -- # uname 00:15:48.560 00:49:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.560 00:49:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1682554 00:15:48.560 00:49:41 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:15:48.560 00:49:41 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:15:48.560 00:49:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1682554' 00:15:48.560 killing process with pid 1682554 00:15:48.560 00:49:41 -- common/autotest_common.sh@955 -- # kill 1682554 00:15:48.560 00:49:41 -- common/autotest_common.sh@960 -- # wait 1682554 00:15:48.820 00:49:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:48.820 00:49:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:48.820 00:49:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:48.820 00:49:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.820 00:49:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.820 00:49:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.820 00:49:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.820 00:49:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.726 00:49:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:50.726 00:15:50.726 real 0m9.879s 00:15:50.726 user 0m13.646s 00:15:50.726 sys 0m4.400s 00:15:50.726 00:49:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:50.726 00:49:43 -- common/autotest_common.sh@10 -- # set +x 00:15:50.726 ************************************ 00:15:50.726 END TEST nvmf_bdevio 00:15:50.726 ************************************ 00:15:50.985 00:49:43 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:15:50.985 00:49:43 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:50.985 00:49:43 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:50.985 00:49:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:50.985 00:49:43 -- common/autotest_common.sh@10 -- # set +x 00:15:50.985 ************************************ 00:15:50.985 START TEST nvmf_bdevio_no_huge 00:15:50.985 ************************************ 00:15:50.985 00:49:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:50.985 * Looking for test storage... 
00:15:50.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.985 00:49:43 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.985 00:49:43 -- nvmf/common.sh@7 -- # uname -s 00:15:50.985 00:49:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.985 00:49:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.985 00:49:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.985 00:49:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.985 00:49:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.985 00:49:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.985 00:49:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.985 00:49:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.985 00:49:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.985 00:49:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.985 00:49:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.985 00:49:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.985 00:49:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.985 00:49:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.985 00:49:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.985 00:49:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.985 00:49:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.985 00:49:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.986 00:49:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.986 00:49:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.986 00:49:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.986 00:49:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.986 00:49:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.986 00:49:43 -- paths/export.sh@5 -- # export PATH 00:15:50.986 00:49:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.986 00:49:43 -- nvmf/common.sh@47 -- # : 0 00:15:50.986 00:49:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.986 00:49:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.986 00:49:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.986 00:49:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.986 00:49:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.986 00:49:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.986 00:49:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.986 00:49:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.986 00:49:43 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.986 00:49:43 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.986 00:49:43 -- target/bdevio.sh@14 -- # nvmftestinit 00:15:50.986 00:49:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:50.986 00:49:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.986 00:49:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:50.986 00:49:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:50.986 00:49:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:50.986 00:49:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.986 00:49:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.986 00:49:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.245 00:49:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:51.245 00:49:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:51.245 00:49:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:51.245 00:49:43 -- common/autotest_common.sh@10 -- # set +x 00:15:56.516 00:49:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:56.516 00:49:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:56.516 00:49:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:56.516 00:49:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:56.516 00:49:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:56.516 00:49:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:56.516 00:49:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:56.516 00:49:48 -- nvmf/common.sh@295 -- # net_devs=() 00:15:56.516 00:49:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:56.516 00:49:48 -- nvmf/common.sh@296 
-- # e810=() 00:15:56.516 00:49:48 -- nvmf/common.sh@296 -- # local -ga e810 00:15:56.516 00:49:48 -- nvmf/common.sh@297 -- # x722=() 00:15:56.516 00:49:48 -- nvmf/common.sh@297 -- # local -ga x722 00:15:56.516 00:49:48 -- nvmf/common.sh@298 -- # mlx=() 00:15:56.516 00:49:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:56.516 00:49:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:56.516 00:49:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:56.516 00:49:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:56.516 00:49:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:56.516 00:49:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:56.516 00:49:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:56.516 00:49:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:56.516 00:49:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:56.516 00:49:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:56.516 00:49:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:56.516 00:49:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:56.516 00:49:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:56.516 00:49:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:56.516 00:49:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:56.516 00:49:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.516 00:49:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:56.516 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:56.516 00:49:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:56.516 00:49:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:56.516 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:56.516 00:49:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:56.516 00:49:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.516 00:49:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.516 00:49:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:56.516 00:49:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.516 00:49:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:56.516 Found 
net devices under 0000:86:00.0: cvl_0_0 00:15:56.516 00:49:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.516 00:49:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:56.516 00:49:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:56.516 00:49:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:56.516 00:49:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:56.516 00:49:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:56.516 Found net devices under 0000:86:00.1: cvl_0_1 00:15:56.516 00:49:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:56.516 00:49:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:56.516 00:49:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:56.516 00:49:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:56.516 00:49:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:56.516 00:49:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.516 00:49:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.516 00:49:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:56.516 00:49:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:56.516 00:49:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:56.516 00:49:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:56.516 00:49:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:56.516 00:49:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:56.516 00:49:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.516 00:49:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:56.516 00:49:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:56.516 00:49:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:56.516 00:49:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.516 00:49:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.516 00:49:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.516 00:49:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:56.516 00:49:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.516 00:49:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.516 00:49:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.516 00:49:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:56.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:15:56.516 00:15:56.516 --- 10.0.0.2 ping statistics --- 00:15:56.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.516 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:15:56.516 00:49:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:15:56.516 00:15:56.516 --- 10.0.0.1 ping statistics --- 00:15:56.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.516 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:15:56.516 00:49:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.516 00:49:49 -- nvmf/common.sh@411 -- # return 0 00:15:56.516 00:49:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:56.516 00:49:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.516 00:49:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:56.516 00:49:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:56.516 00:49:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.516 00:49:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:56.516 00:49:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:56.516 00:49:49 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:56.516 00:49:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:56.516 00:49:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:56.516 00:49:49 -- common/autotest_common.sh@10 -- # set +x 00:15:56.516 00:49:49 -- nvmf/common.sh@470 -- # nvmfpid=1686349 00:15:56.516 00:49:49 -- nvmf/common.sh@471 -- # waitforlisten 1686349 00:15:56.516 00:49:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:56.516 00:49:49 -- common/autotest_common.sh@817 -- # '[' -z 1686349 ']' 00:15:56.516 00:49:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.516 00:49:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:56.516 00:49:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.516 00:49:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:56.516 00:49:49 -- common/autotest_common.sh@10 -- # set +x 00:15:56.516 [2024-04-27 00:49:49.086916] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:15:56.516 [2024-04-27 00:49:49.086962] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:56.516 [2024-04-27 00:49:49.150053] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.774 [2024-04-27 00:49:49.232806] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.774 [2024-04-27 00:49:49.232841] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.774 [2024-04-27 00:49:49.232849] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.774 [2024-04-27 00:49:49.232855] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.774 [2024-04-27 00:49:49.232861] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
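The no-huge variant repeats the same fixture; the material difference is that the target (and, further down, the bdevio app with the same --no-huge -s 1024 flags) runs without hugepages and is capped at 1024 MB, which is why the EAL line above shows "-m 1024 --no-huge --iova-mode=va". The target invocation as captured:

  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78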
00:15:56.774 [2024-04-27 00:49:49.232969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:56.774 [2024-04-27 00:49:49.233093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:56.774 [2024-04-27 00:49:49.233199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.774 [2024-04-27 00:49:49.233201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:57.342 00:49:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:57.342 00:49:49 -- common/autotest_common.sh@850 -- # return 0 00:15:57.342 00:49:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:57.342 00:49:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:57.342 00:49:49 -- common/autotest_common.sh@10 -- # set +x 00:15:57.342 00:49:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.342 00:49:49 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:57.342 00:49:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.342 00:49:49 -- common/autotest_common.sh@10 -- # set +x 00:15:57.342 [2024-04-27 00:49:49.928379] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.342 00:49:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.342 00:49:49 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:57.342 00:49:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.342 00:49:49 -- common/autotest_common.sh@10 -- # set +x 00:15:57.342 Malloc0 00:15:57.342 00:49:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.342 00:49:49 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:57.342 00:49:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.342 00:49:49 -- common/autotest_common.sh@10 -- # set +x 00:15:57.342 00:49:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.342 00:49:49 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:57.342 00:49:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.342 00:49:49 -- common/autotest_common.sh@10 -- # set +x 00:15:57.342 00:49:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.342 00:49:49 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.342 00:49:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.342 00:49:49 -- common/autotest_common.sh@10 -- # set +x 00:15:57.342 [2024-04-27 00:49:49.968624] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.342 00:49:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.342 00:49:49 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:57.342 00:49:49 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:57.342 00:49:49 -- nvmf/common.sh@521 -- # config=() 00:15:57.343 00:49:49 -- nvmf/common.sh@521 -- # local subsystem config 00:15:57.343 00:49:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:57.343 00:49:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:57.343 { 00:15:57.343 "params": { 00:15:57.343 "name": "Nvme$subsystem", 00:15:57.343 "trtype": "$TEST_TRANSPORT", 00:15:57.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:57.343 "adrfam": "ipv4", 00:15:57.343 
"trsvcid": "$NVMF_PORT", 00:15:57.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:57.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:57.343 "hdgst": ${hdgst:-false}, 00:15:57.343 "ddgst": ${ddgst:-false} 00:15:57.343 }, 00:15:57.343 "method": "bdev_nvme_attach_controller" 00:15:57.343 } 00:15:57.343 EOF 00:15:57.343 )") 00:15:57.343 00:49:49 -- nvmf/common.sh@543 -- # cat 00:15:57.343 00:49:49 -- nvmf/common.sh@545 -- # jq . 00:15:57.343 00:49:49 -- nvmf/common.sh@546 -- # IFS=, 00:15:57.343 00:49:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:57.343 "params": { 00:15:57.343 "name": "Nvme1", 00:15:57.343 "trtype": "tcp", 00:15:57.343 "traddr": "10.0.0.2", 00:15:57.343 "adrfam": "ipv4", 00:15:57.343 "trsvcid": "4420", 00:15:57.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:57.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:57.343 "hdgst": false, 00:15:57.343 "ddgst": false 00:15:57.343 }, 00:15:57.343 "method": "bdev_nvme_attach_controller" 00:15:57.343 }' 00:15:57.343 [2024-04-27 00:49:50.016273] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:15:57.343 [2024-04-27 00:49:50.016318] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1686597 ] 00:15:57.601 [2024-04-27 00:49:50.074508] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:57.601 [2024-04-27 00:49:50.159781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.601 [2024-04-27 00:49:50.159879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.601 [2024-04-27 00:49:50.159879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.859 I/O targets: 00:15:57.859 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:57.859 00:15:57.859 00:15:57.859 CUnit - A unit testing framework for C - Version 2.1-3 00:15:57.859 http://cunit.sourceforge.net/ 00:15:57.859 00:15:57.859 00:15:57.859 Suite: bdevio tests on: Nvme1n1 00:15:57.859 Test: blockdev write read block ...passed 00:15:57.859 Test: blockdev write zeroes read block ...passed 00:15:57.859 Test: blockdev write zeroes read no split ...passed 00:15:57.859 Test: blockdev write zeroes read split ...passed 00:15:57.859 Test: blockdev write zeroes read split partial ...passed 00:15:57.859 Test: blockdev reset ...[2024-04-27 00:49:50.540451] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:57.859 [2024-04-27 00:49:50.540515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcdf870 (9): Bad file descriptor 00:15:58.118 [2024-04-27 00:49:50.611224] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:58.118 passed 00:15:58.118 Test: blockdev write read 8 blocks ...passed 00:15:58.118 Test: blockdev write read size > 128k ...passed 00:15:58.118 Test: blockdev write read invalid size ...passed 00:15:58.118 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:58.118 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:58.118 Test: blockdev write read max offset ...passed 00:15:58.118 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:58.118 Test: blockdev writev readv 8 blocks ...passed 00:15:58.118 Test: blockdev writev readv 30 x 1block ...passed 00:15:58.118 Test: blockdev writev readv block ...passed 00:15:58.118 Test: blockdev writev readv size > 128k ...passed 00:15:58.118 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:58.118 Test: blockdev comparev and writev ...[2024-04-27 00:49:50.788516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.118 [2024-04-27 00:49:50.788546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:58.118 [2024-04-27 00:49:50.788560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.118 [2024-04-27 00:49:50.788568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:58.118 [2024-04-27 00:49:50.788933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.118 [2024-04-27 00:49:50.788945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:58.118 [2024-04-27 00:49:50.788957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.118 [2024-04-27 00:49:50.788965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:58.118 [2024-04-27 00:49:50.789340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.118 [2024-04-27 00:49:50.789352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:58.118 [2024-04-27 00:49:50.789363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.118 [2024-04-27 00:49:50.789372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:58.118 [2024-04-27 00:49:50.789754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.118 [2024-04-27 00:49:50.789765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:58.118 [2024-04-27 00:49:50.789777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:58.118 [2024-04-27 00:49:50.789785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:58.377 passed 00:15:58.377 Test: blockdev nvme passthru rw ...passed 00:15:58.377 Test: blockdev nvme passthru vendor specific ...[2024-04-27 00:49:50.871671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.377 [2024-04-27 00:49:50.871688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:58.377 [2024-04-27 00:49:50.871916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.377 [2024-04-27 00:49:50.871926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:58.377 [2024-04-27 00:49:50.872146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.377 [2024-04-27 00:49:50.872157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:58.377 [2024-04-27 00:49:50.872388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:58.377 [2024-04-27 00:49:50.872399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:58.377 passed 00:15:58.377 Test: blockdev nvme admin passthru ...passed 00:15:58.377 Test: blockdev copy ...passed 00:15:58.377 00:15:58.377 Run Summary: Type Total Ran Passed Failed Inactive 00:15:58.377 suites 1 1 n/a 0 0 00:15:58.377 tests 23 23 23 0 0 00:15:58.377 asserts 152 152 152 0 n/a 00:15:58.377 00:15:58.377 Elapsed time = 1.182 seconds 00:15:58.636 00:49:51 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:58.636 00:49:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.636 00:49:51 -- common/autotest_common.sh@10 -- # set +x 00:15:58.636 00:49:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.636 00:49:51 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:58.636 00:49:51 -- target/bdevio.sh@30 -- # nvmftestfini 00:15:58.636 00:49:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:58.636 00:49:51 -- nvmf/common.sh@117 -- # sync 00:15:58.636 00:49:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:58.636 00:49:51 -- nvmf/common.sh@120 -- # set +e 00:15:58.636 00:49:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:58.636 00:49:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.636 rmmod nvme_tcp 00:15:58.636 rmmod nvme_fabrics 00:15:58.636 rmmod nvme_keyring 00:15:58.636 00:49:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:58.636 00:49:51 -- nvmf/common.sh@124 -- # set -e 00:15:58.636 00:49:51 -- nvmf/common.sh@125 -- # return 0 00:15:58.636 00:49:51 -- nvmf/common.sh@478 -- # '[' -n 1686349 ']' 00:15:58.636 00:49:51 -- nvmf/common.sh@479 -- # killprocess 1686349 00:15:58.636 00:49:51 -- common/autotest_common.sh@936 -- # '[' -z 1686349 ']' 00:15:58.636 00:49:51 -- common/autotest_common.sh@940 -- # kill -0 1686349 00:15:58.636 00:49:51 -- common/autotest_common.sh@941 -- # uname 00:15:58.636 00:49:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:58.636 00:49:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1686349 00:15:58.895 00:49:51 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:15:58.895 00:49:51 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:15:58.895 00:49:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1686349' 00:15:58.895 killing process with pid 1686349 00:15:58.895 00:49:51 -- common/autotest_common.sh@955 -- # kill 1686349 00:15:58.895 00:49:51 -- common/autotest_common.sh@960 -- # wait 1686349 00:15:59.155 00:49:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:59.155 00:49:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:59.155 00:49:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:59.155 00:49:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:59.155 00:49:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:59.155 00:49:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.155 00:49:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.155 00:49:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.059 00:49:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:01.060 00:16:01.060 real 0m10.179s 00:16:01.060 user 0m13.201s 00:16:01.060 sys 0m4.837s 00:16:01.060 00:49:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:01.060 00:49:53 -- common/autotest_common.sh@10 -- # set +x 00:16:01.060 ************************************ 00:16:01.060 END TEST nvmf_bdevio_no_huge 00:16:01.060 ************************************ 00:16:01.319 00:49:53 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:01.319 00:49:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:01.319 00:49:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:01.319 00:49:53 -- common/autotest_common.sh@10 -- # set +x 00:16:01.319 ************************************ 00:16:01.319 START TEST nvmf_tls 00:16:01.319 ************************************ 00:16:01.319 00:49:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:01.319 * Looking for test storage... 
00:16:01.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.319 00:49:53 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.319 00:49:53 -- nvmf/common.sh@7 -- # uname -s 00:16:01.319 00:49:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.319 00:49:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.319 00:49:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.319 00:49:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.319 00:49:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.319 00:49:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.319 00:49:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.319 00:49:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.319 00:49:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.319 00:49:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.319 00:49:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.320 00:49:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.320 00:49:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.320 00:49:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.320 00:49:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.320 00:49:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.320 00:49:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.320 00:49:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.320 00:49:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.320 00:49:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.320 00:49:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.320 00:49:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.320 00:49:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.320 00:49:53 -- paths/export.sh@5 -- # export PATH 00:16:01.320 00:49:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.320 00:49:53 -- nvmf/common.sh@47 -- # : 0 00:16:01.320 00:49:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.320 00:49:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.320 00:49:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.320 00:49:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.320 00:49:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.320 00:49:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.320 00:49:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.320 00:49:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.320 00:49:53 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:01.320 00:49:53 -- target/tls.sh@62 -- # nvmftestinit 00:16:01.320 00:49:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:01.320 00:49:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.320 00:49:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:01.320 00:49:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:01.320 00:49:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:01.320 00:49:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.320 00:49:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.320 00:49:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.320 00:49:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:01.320 00:49:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:01.320 00:49:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:01.320 00:49:53 -- common/autotest_common.sh@10 -- # set +x 00:16:06.591 00:49:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:06.591 00:49:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:06.591 00:49:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:06.591 00:49:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:06.591 00:49:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:06.591 00:49:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:06.591 00:49:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:06.591 00:49:58 -- nvmf/common.sh@295 -- # net_devs=() 00:16:06.591 00:49:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:06.591 00:49:58 -- nvmf/common.sh@296 -- # e810=() 00:16:06.591 
00:49:58 -- nvmf/common.sh@296 -- # local -ga e810 00:16:06.591 00:49:58 -- nvmf/common.sh@297 -- # x722=() 00:16:06.591 00:49:58 -- nvmf/common.sh@297 -- # local -ga x722 00:16:06.591 00:49:58 -- nvmf/common.sh@298 -- # mlx=() 00:16:06.591 00:49:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:06.591 00:49:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:06.591 00:49:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:06.591 00:49:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:06.591 00:49:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:06.591 00:49:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:06.591 00:49:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:06.591 00:49:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:06.591 00:49:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:06.591 00:49:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:06.591 00:49:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:06.591 00:49:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:06.591 00:49:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:06.591 00:49:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:06.591 00:49:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:06.591 00:49:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.591 00:49:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:06.591 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:06.591 00:49:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:06.591 00:49:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:06.591 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:06.591 00:49:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:06.591 00:49:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.591 00:49:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.591 00:49:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:06.591 00:49:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.591 00:49:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:06.591 Found net devices under 
0000:86:00.0: cvl_0_0 00:16:06.591 00:49:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.591 00:49:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:06.591 00:49:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:06.591 00:49:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:06.591 00:49:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:06.591 00:49:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:06.591 Found net devices under 0000:86:00.1: cvl_0_1 00:16:06.591 00:49:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:06.591 00:49:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:06.591 00:49:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:06.591 00:49:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:06.591 00:49:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.591 00:49:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.591 00:49:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:06.591 00:49:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:06.591 00:49:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:06.591 00:49:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:06.591 00:49:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:06.591 00:49:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:06.591 00:49:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.591 00:49:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:06.591 00:49:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:06.591 00:49:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:06.591 00:49:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:06.591 00:49:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:06.591 00:49:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:06.591 00:49:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:06.591 00:49:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:06.591 00:49:58 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:06.591 00:49:58 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:06.591 00:49:58 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:06.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:16:06.591 00:16:06.591 --- 10.0.0.2 ping statistics --- 00:16:06.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.591 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:16:06.591 00:49:58 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:06.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:06.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:16:06.591 00:16:06.591 --- 10.0.0.1 ping statistics --- 00:16:06.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.591 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:16:06.591 00:49:58 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.591 00:49:58 -- nvmf/common.sh@411 -- # return 0 00:16:06.591 00:49:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:06.591 00:49:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.591 00:49:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:06.591 00:49:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.591 00:49:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:06.591 00:49:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:06.591 00:49:58 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:06.592 00:49:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:06.592 00:49:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:06.592 00:49:58 -- common/autotest_common.sh@10 -- # set +x 00:16:06.592 00:49:58 -- nvmf/common.sh@470 -- # nvmfpid=1690135 00:16:06.592 00:49:58 -- nvmf/common.sh@471 -- # waitforlisten 1690135 00:16:06.592 00:49:58 -- common/autotest_common.sh@817 -- # '[' -z 1690135 ']' 00:16:06.592 00:49:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.592 00:49:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:06.592 00:49:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.592 00:49:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:06.592 00:49:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:06.592 00:49:58 -- common/autotest_common.sh@10 -- # set +x 00:16:06.592 [2024-04-27 00:49:58.895211] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:06.592 [2024-04-27 00:49:58.895255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.592 EAL: No free 2048 kB hugepages reported on node 1 00:16:06.592 [2024-04-27 00:49:58.951666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.592 [2024-04-27 00:49:59.028224] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.592 [2024-04-27 00:49:59.028257] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.592 [2024-04-27 00:49:59.028264] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.592 [2024-04-27 00:49:59.028270] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.592 [2024-04-27 00:49:59.028276] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:06.592 [2024-04-27 00:49:59.028295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.158 00:49:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:07.158 00:49:59 -- common/autotest_common.sh@850 -- # return 0 00:16:07.158 00:49:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:07.158 00:49:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:07.158 00:49:59 -- common/autotest_common.sh@10 -- # set +x 00:16:07.158 00:49:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.158 00:49:59 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:07.158 00:49:59 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:07.417 true 00:16:07.417 00:49:59 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:07.417 00:49:59 -- target/tls.sh@73 -- # jq -r .tls_version 00:16:07.417 00:50:00 -- target/tls.sh@73 -- # version=0 00:16:07.417 00:50:00 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:07.417 00:50:00 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:07.675 00:50:00 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:07.675 00:50:00 -- target/tls.sh@81 -- # jq -r .tls_version 00:16:07.934 00:50:00 -- target/tls.sh@81 -- # version=13 00:16:07.934 00:50:00 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:07.934 00:50:00 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:07.934 00:50:00 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:07.934 00:50:00 -- target/tls.sh@89 -- # jq -r .tls_version 00:16:08.193 00:50:00 -- target/tls.sh@89 -- # version=7 00:16:08.193 00:50:00 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:08.193 00:50:00 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:08.193 00:50:00 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:08.452 00:50:00 -- target/tls.sh@96 -- # ktls=false 00:16:08.452 00:50:00 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:08.452 00:50:00 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:08.452 00:50:01 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:08.452 00:50:01 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:08.711 00:50:01 -- target/tls.sh@104 -- # ktls=true 00:16:08.711 00:50:01 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:08.711 00:50:01 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:08.969 00:50:01 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:08.969 00:50:01 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:08.969 00:50:01 -- target/tls.sh@112 -- # ktls=false 00:16:08.969 00:50:01 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:08.969 00:50:01 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
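The sequence above exercises the socket-implementation RPCs: the test selects the ssl implementation, sets a TLS version or toggles kTLS with sock_impl_set_options, and reads the value back with sock_impl_get_options filtered through jq. A condensed sketch of the same round-trip, assuming the target answers on the default RPC socket and abbreviating the rpc.py path:

    # Round-trip check of an ssl sock-impl option (sketch).
    rpc=./scripts/rpc.py

    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13

    ver=$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)
    [[ $ver == 13 ]] || { echo "unexpected tls_version: $ver"; exit 1; }

    $rpc sock_impl_set_options -i ssl --enable-ktls
    ktls=$($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls)
    echo "ktls enabled: $ktls"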
00:16:08.969 00:50:01 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:08.969 00:50:01 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:08.969 00:50:01 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:16:08.969 00:50:01 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:16:08.969 00:50:01 -- nvmf/common.sh@693 -- # digest=1 00:16:08.969 00:50:01 -- nvmf/common.sh@694 -- # python - 00:16:09.228 00:50:01 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:09.228 00:50:01 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:09.228 00:50:01 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:09.228 00:50:01 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:09.228 00:50:01 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:16:09.228 00:50:01 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:16:09.228 00:50:01 -- nvmf/common.sh@693 -- # digest=1 00:16:09.228 00:50:01 -- nvmf/common.sh@694 -- # python - 00:16:09.228 00:50:01 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:09.228 00:50:01 -- target/tls.sh@121 -- # mktemp 00:16:09.228 00:50:01 -- target/tls.sh@121 -- # key_path=/tmp/tmp.uk9QsBLmqK 00:16:09.228 00:50:01 -- target/tls.sh@122 -- # mktemp 00:16:09.228 00:50:01 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.b3FoyXnB7P 00:16:09.228 00:50:01 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:09.228 00:50:01 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:09.228 00:50:01 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.uk9QsBLmqK 00:16:09.228 00:50:01 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.b3FoyXnB7P 00:16:09.228 00:50:01 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:09.486 00:50:01 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:09.486 00:50:02 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.uk9QsBLmqK 00:16:09.486 00:50:02 -- target/tls.sh@49 -- # local key=/tmp/tmp.uk9QsBLmqK 00:16:09.486 00:50:02 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:09.746 [2024-04-27 00:50:02.318213] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.746 00:50:02 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:10.005 00:50:02 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:10.005 [2024-04-27 00:50:02.667107] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:10.005 [2024-04-27 00:50:02.667301] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.005 00:50:02 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:10.263 malloc0 00:16:10.263 00:50:02 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:10.521 00:50:03 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uk9QsBLmqK 00:16:10.521 [2024-04-27 00:50:03.212680] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:10.779 00:50:03 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.uk9QsBLmqK 00:16:10.779 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.752 Initializing NVMe Controllers 00:16:20.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:20.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:20.752 Initialization complete. Launching workers. 00:16:20.752 ======================================================== 00:16:20.752 Latency(us) 00:16:20.752 Device Information : IOPS MiB/s Average min max 00:16:20.752 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16118.97 62.96 3970.93 863.28 5724.89 00:16:20.752 ======================================================== 00:16:20.752 Total : 16118.97 62.96 3970.93 863.28 5724.89 00:16:20.752 00:16:20.752 00:50:13 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.uk9QsBLmqK 00:16:20.752 00:50:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:20.752 00:50:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:20.752 00:50:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:20.752 00:50:13 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uk9QsBLmqK' 00:16:20.752 00:50:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:20.752 00:50:13 -- target/tls.sh@28 -- # bdevperf_pid=1692692 00:16:20.752 00:50:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:20.752 00:50:13 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:20.752 00:50:13 -- target/tls.sh@31 -- # waitforlisten 1692692 /var/tmp/bdevperf.sock 00:16:20.752 00:50:13 -- common/autotest_common.sh@817 -- # '[' -z 1692692 ']' 00:16:20.752 00:50:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:20.752 00:50:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:20.752 00:50:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:20.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:20.752 00:50:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:20.752 00:50:13 -- common/autotest_common.sh@10 -- # set +x 00:16:20.752 [2024-04-27 00:50:13.388865] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
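The keys written to /tmp/tmp.uk9QsBLmqK and /tmp/tmp.b3FoyXnB7P above are in the NVMe TLS PSK interchange format: the NVMeTLSkey-1 prefix, a two-digit hash field, and a base64 blob carrying the configured key material plus a CRC32, terminated by a colon. The sketch below mirrors the python one-liner the harness traces; the little-endian CRC placement is an assumption and may differ from the exact helper used here.

    # Sketch: build an interchange-format key the way format_interchange_psk appears to.
    build_interchange_key() {
        local key=$1 digest=$2
        python3 - "$key" "$digest" <<'PYEOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()
    digest = int(sys.argv[2])                    # 1 or 2 in the traces above
    crc = zlib.crc32(key).to_bytes(4, "little")  # assumed byte order
    print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
    PYEOF
    }

    key_file=$(mktemp)
    build_interchange_key 00112233445566778899aabbccddeeff 1 > "$key_file"
    chmod 0600 "$key_file"    # the tests keep PSK files restricted, as seen above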
00:16:20.752 [2024-04-27 00:50:13.388912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1692692 ] 00:16:20.752 EAL: No free 2048 kB hugepages reported on node 1 00:16:20.752 [2024-04-27 00:50:13.438584] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.010 [2024-04-27 00:50:13.515642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.577 00:50:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:21.577 00:50:14 -- common/autotest_common.sh@850 -- # return 0 00:16:21.577 00:50:14 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uk9QsBLmqK 00:16:21.836 [2024-04-27 00:50:14.358049] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:21.836 [2024-04-27 00:50:14.358118] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:21.836 TLSTESTn1 00:16:21.836 00:50:14 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:22.095 Running I/O for 10 seconds... 00:16:32.070 00:16:32.070 Latency(us) 00:16:32.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.070 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:32.070 Verification LBA range: start 0x0 length 0x2000 00:16:32.070 TLSTESTn1 : 10.08 1593.28 6.22 0.00 0.00 80062.08 5413.84 122181.90 00:16:32.070 =================================================================================================================== 00:16:32.070 Total : 1593.28 6.22 0.00 0.00 80062.08 5413.84 122181.90 00:16:32.070 0 00:16:32.070 00:50:24 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:32.070 00:50:24 -- target/tls.sh@45 -- # killprocess 1692692 00:16:32.070 00:50:24 -- common/autotest_common.sh@936 -- # '[' -z 1692692 ']' 00:16:32.070 00:50:24 -- common/autotest_common.sh@940 -- # kill -0 1692692 00:16:32.070 00:50:24 -- common/autotest_common.sh@941 -- # uname 00:16:32.070 00:50:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:32.070 00:50:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1692692 00:16:32.070 00:50:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:32.070 00:50:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:32.070 00:50:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1692692' 00:16:32.070 killing process with pid 1692692 00:16:32.070 00:50:24 -- common/autotest_common.sh@955 -- # kill 1692692 00:16:32.070 Received shutdown signal, test time was about 10.000000 seconds 00:16:32.070 00:16:32.070 Latency(us) 00:16:32.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.070 =================================================================================================================== 00:16:32.070 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:32.070 [2024-04-27 00:50:24.728492] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:32.070 00:50:24 -- common/autotest_common.sh@960 -- # wait 1692692 00:16:32.331 00:50:24 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.b3FoyXnB7P 00:16:32.331 00:50:24 -- common/autotest_common.sh@638 -- # local es=0 00:16:32.331 00:50:24 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.b3FoyXnB7P 00:16:32.331 00:50:24 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:16:32.331 00:50:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:32.331 00:50:24 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:16:32.331 00:50:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:32.331 00:50:24 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.b3FoyXnB7P 00:16:32.331 00:50:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:32.331 00:50:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:32.331 00:50:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:32.331 00:50:24 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.b3FoyXnB7P' 00:16:32.331 00:50:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:32.331 00:50:24 -- target/tls.sh@28 -- # bdevperf_pid=1694531 00:16:32.331 00:50:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:32.331 00:50:24 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:32.331 00:50:24 -- target/tls.sh@31 -- # waitforlisten 1694531 /var/tmp/bdevperf.sock 00:16:32.331 00:50:24 -- common/autotest_common.sh@817 -- # '[' -z 1694531 ']' 00:16:32.331 00:50:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.331 00:50:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:32.331 00:50:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:32.331 00:50:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:32.331 00:50:24 -- common/autotest_common.sh@10 -- # set +x 00:16:32.331 [2024-04-27 00:50:24.984916] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
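target/tls.sh@146 above is a negative case: run_bdevperf is handed /tmp/tmp.b3FoyXnB7P, a key the target subsystem was never configured with, and the NOT wrapper expects the attach to fail. A minimal sketch of that expect-failure pattern, using an illustrative helper name rather than the harness's NOT implementation:

    # Sketch of an expect-failure wrapper in the style of the NOT calls traced above.
    expect_failure() {
        if "$@"; then
            echo "ERROR: '$*' succeeded but was expected to fail" >&2
            return 1
        fi
        return 0
    }

    # Example: attaching with a PSK the subsystem does not know should be rejected.
    expect_failure ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.b3FoyXnB7P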
00:16:32.332 [2024-04-27 00:50:24.984963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1694531 ] 00:16:32.332 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.628 [2024-04-27 00:50:25.035544] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.628 [2024-04-27 00:50:25.103859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.206 00:50:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:33.206 00:50:25 -- common/autotest_common.sh@850 -- # return 0 00:16:33.206 00:50:25 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.b3FoyXnB7P 00:16:33.465 [2024-04-27 00:50:25.942256] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:33.465 [2024-04-27 00:50:25.942343] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:33.465 [2024-04-27 00:50:25.951846] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:33.465 [2024-04-27 00:50:25.952828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22600 (107): Transport endpoint is not connected 00:16:33.465 [2024-04-27 00:50:25.953820] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d22600 (9): Bad file descriptor 00:16:33.465 [2024-04-27 00:50:25.954821] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:33.466 [2024-04-27 00:50:25.954834] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:33.466 [2024-04-27 00:50:25.954843] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:33.466 request: 00:16:33.466 { 00:16:33.466 "name": "TLSTEST", 00:16:33.466 "trtype": "tcp", 00:16:33.466 "traddr": "10.0.0.2", 00:16:33.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:33.466 "adrfam": "ipv4", 00:16:33.466 "trsvcid": "4420", 00:16:33.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.466 "psk": "/tmp/tmp.b3FoyXnB7P", 00:16:33.466 "method": "bdev_nvme_attach_controller", 00:16:33.466 "req_id": 1 00:16:33.466 } 00:16:33.466 Got JSON-RPC error response 00:16:33.466 response: 00:16:33.466 { 00:16:33.466 "code": -32602, 00:16:33.466 "message": "Invalid parameters" 00:16:33.466 } 00:16:33.466 00:50:25 -- target/tls.sh@36 -- # killprocess 1694531 00:16:33.466 00:50:25 -- common/autotest_common.sh@936 -- # '[' -z 1694531 ']' 00:16:33.466 00:50:25 -- common/autotest_common.sh@940 -- # kill -0 1694531 00:16:33.466 00:50:25 -- common/autotest_common.sh@941 -- # uname 00:16:33.466 00:50:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:33.466 00:50:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1694531 00:16:33.466 00:50:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:33.466 00:50:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:33.466 00:50:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1694531' 00:16:33.466 killing process with pid 1694531 00:16:33.466 00:50:26 -- common/autotest_common.sh@955 -- # kill 1694531 00:16:33.466 Received shutdown signal, test time was about 10.000000 seconds 00:16:33.466 00:16:33.466 Latency(us) 00:16:33.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.466 =================================================================================================================== 00:16:33.466 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:33.466 [2024-04-27 00:50:26.022427] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:33.466 00:50:26 -- common/autotest_common.sh@960 -- # wait 1694531 00:16:33.725 00:50:26 -- target/tls.sh@37 -- # return 1 00:16:33.725 00:50:26 -- common/autotest_common.sh@641 -- # es=1 00:16:33.725 00:50:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:33.725 00:50:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:33.725 00:50:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:33.725 00:50:26 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uk9QsBLmqK 00:16:33.725 00:50:26 -- common/autotest_common.sh@638 -- # local es=0 00:16:33.725 00:50:26 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uk9QsBLmqK 00:16:33.725 00:50:26 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:16:33.725 00:50:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:33.725 00:50:26 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:16:33.725 00:50:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:33.725 00:50:26 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.uk9QsBLmqK 00:16:33.725 00:50:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:33.725 00:50:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:33.725 00:50:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
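The request/response dump above is the server-side view of the failed bdev_nvme_attach_controller call. For illustration only, the same call can be issued directly over the bdevperf RPC unix socket; this is a sketch (single recv, no response framing), not how rpc.py is implemented.

    python3 - <<'PYEOF'
    import json, socket
    # Fields taken from the request dump above; "id" is the wire-level request id.
    req = {
        "jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_attach_controller",
        "params": {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
                   "adrfam": "ipv4", "trsvcid": "4420",
                   "subnqn": "nqn.2016-06.io.spdk:cnode1",
                   "hostnqn": "nqn.2016-06.io.spdk:host1",
                   "psk": "/tmp/tmp.b3FoyXnB7P"},
    }
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/var/tmp/bdevperf.sock")
    s.sendall(json.dumps(req).encode())
    resp = json.loads(s.recv(65536))          # simplification: assumes one recv suffices
    print(resp.get("error", resp.get("result")))
    PYEOF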
00:16:33.725 00:50:26 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uk9QsBLmqK' 00:16:33.725 00:50:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:33.725 00:50:26 -- target/tls.sh@28 -- # bdevperf_pid=1694779 00:16:33.725 00:50:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:33.725 00:50:26 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:33.725 00:50:26 -- target/tls.sh@31 -- # waitforlisten 1694779 /var/tmp/bdevperf.sock 00:16:33.725 00:50:26 -- common/autotest_common.sh@817 -- # '[' -z 1694779 ']' 00:16:33.725 00:50:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:33.725 00:50:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:33.725 00:50:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:33.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:33.725 00:50:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:33.725 00:50:26 -- common/autotest_common.sh@10 -- # set +x 00:16:33.725 [2024-04-27 00:50:26.269169] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:33.725 [2024-04-27 00:50:26.269215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1694779 ] 00:16:33.725 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.725 [2024-04-27 00:50:26.318560] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.725 [2024-04-27 00:50:26.385087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.661 00:50:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:34.661 00:50:27 -- common/autotest_common.sh@850 -- # return 0 00:16:34.661 00:50:27 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.uk9QsBLmqK 00:16:34.661 [2024-04-27 00:50:27.235400] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:34.661 [2024-04-27 00:50:27.235479] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:34.661 [2024-04-27 00:50:27.245062] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:34.661 [2024-04-27 00:50:27.245090] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:34.661 [2024-04-27 00:50:27.245114] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:34.661 [2024-04-27 00:50:27.245950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1736600 (107): Transport endpoint is not connected 00:16:34.661 [2024-04-27 00:50:27.246943] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1736600 (9): Bad file descriptor 00:16:34.661 [2024-04-27 00:50:27.247944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:34.661 [2024-04-27 00:50:27.247955] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:34.661 [2024-04-27 00:50:27.247961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:34.661 request: 00:16:34.661 { 00:16:34.661 "name": "TLSTEST", 00:16:34.661 "trtype": "tcp", 00:16:34.661 "traddr": "10.0.0.2", 00:16:34.661 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:34.661 "adrfam": "ipv4", 00:16:34.661 "trsvcid": "4420", 00:16:34.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.661 "psk": "/tmp/tmp.uk9QsBLmqK", 00:16:34.661 "method": "bdev_nvme_attach_controller", 00:16:34.661 "req_id": 1 00:16:34.661 } 00:16:34.661 Got JSON-RPC error response 00:16:34.661 response: 00:16:34.661 { 00:16:34.661 "code": -32602, 00:16:34.661 "message": "Invalid parameters" 00:16:34.661 } 00:16:34.661 00:50:27 -- target/tls.sh@36 -- # killprocess 1694779 00:16:34.661 00:50:27 -- common/autotest_common.sh@936 -- # '[' -z 1694779 ']' 00:16:34.661 00:50:27 -- common/autotest_common.sh@940 -- # kill -0 1694779 00:16:34.661 00:50:27 -- common/autotest_common.sh@941 -- # uname 00:16:34.661 00:50:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.661 00:50:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1694779 00:16:34.661 00:50:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:34.661 00:50:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:34.661 00:50:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1694779' 00:16:34.661 killing process with pid 1694779 00:16:34.661 00:50:27 -- common/autotest_common.sh@955 -- # kill 1694779 00:16:34.661 Received shutdown signal, test time was about 10.000000 seconds 00:16:34.661 00:16:34.661 Latency(us) 00:16:34.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.661 =================================================================================================================== 00:16:34.661 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:34.661 [2024-04-27 00:50:27.315310] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:34.661 00:50:27 -- common/autotest_common.sh@960 -- # wait 1694779 00:16:34.920 00:50:27 -- target/tls.sh@37 -- # return 1 00:16:34.920 00:50:27 -- common/autotest_common.sh@641 -- # es=1 00:16:34.920 00:50:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:34.920 00:50:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:34.920 00:50:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:34.920 00:50:27 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uk9QsBLmqK 00:16:34.920 00:50:27 -- common/autotest_common.sh@638 -- # local es=0 00:16:34.920 00:50:27 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uk9QsBLmqK 00:16:34.920 00:50:27 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:16:34.920 00:50:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:34.920 00:50:27 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:16:34.920 00:50:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:34.920 00:50:27 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.uk9QsBLmqK 00:16:34.920 00:50:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:34.920 00:50:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:34.920 00:50:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:34.920 00:50:27 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.uk9QsBLmqK' 00:16:34.920 00:50:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:34.920 00:50:27 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:34.920 00:50:27 -- target/tls.sh@28 -- # bdevperf_pid=1695016 00:16:34.920 00:50:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:34.920 00:50:27 -- target/tls.sh@31 -- # waitforlisten 1695016 /var/tmp/bdevperf.sock 00:16:34.920 00:50:27 -- common/autotest_common.sh@817 -- # '[' -z 1695016 ']' 00:16:34.920 00:50:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.920 00:50:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:34.920 00:50:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:34.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:34.920 00:50:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:34.920 00:50:27 -- common/autotest_common.sh@10 -- # set +x 00:16:34.920 [2024-04-27 00:50:27.542664] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:16:34.920 [2024-04-27 00:50:27.542715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1695016 ] 00:16:34.920 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.920 [2024-04-27 00:50:27.593138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.178 [2024-04-27 00:50:27.671247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.178 00:50:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:35.178 00:50:27 -- common/autotest_common.sh@850 -- # return 0 00:16:35.178 00:50:27 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.uk9QsBLmqK 00:16:35.437 [2024-04-27 00:50:27.891290] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:35.437 [2024-04-27 00:50:27.891370] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:35.437 [2024-04-27 00:50:27.896714] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:35.437 [2024-04-27 00:50:27.896734] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:35.437 [2024-04-27 00:50:27.896757] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:35.437 [2024-04-27 00:50:27.897892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446600 (107): Transport endpoint is not connected 00:16:35.437 [2024-04-27 00:50:27.898885] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2446600 (9): Bad file descriptor 00:16:35.437 [2024-04-27 00:50:27.899886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:35.437 [2024-04-27 00:50:27.899900] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:35.437 [2024-04-27 00:50:27.899907] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:16:35.437 request: 00:16:35.437 { 00:16:35.437 "name": "TLSTEST", 00:16:35.437 "trtype": "tcp", 00:16:35.437 "traddr": "10.0.0.2", 00:16:35.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:35.437 "adrfam": "ipv4", 00:16:35.437 "trsvcid": "4420", 00:16:35.437 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:35.437 "psk": "/tmp/tmp.uk9QsBLmqK", 00:16:35.437 "method": "bdev_nvme_attach_controller", 00:16:35.437 "req_id": 1 00:16:35.437 } 00:16:35.437 Got JSON-RPC error response 00:16:35.437 response: 00:16:35.437 { 00:16:35.437 "code": -32602, 00:16:35.437 "message": "Invalid parameters" 00:16:35.437 } 00:16:35.437 00:50:27 -- target/tls.sh@36 -- # killprocess 1695016 00:16:35.437 00:50:27 -- common/autotest_common.sh@936 -- # '[' -z 1695016 ']' 00:16:35.437 00:50:27 -- common/autotest_common.sh@940 -- # kill -0 1695016 00:16:35.437 00:50:27 -- common/autotest_common.sh@941 -- # uname 00:16:35.437 00:50:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:35.437 00:50:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1695016 00:16:35.437 00:50:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:35.437 00:50:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:35.437 00:50:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1695016' 00:16:35.437 killing process with pid 1695016 00:16:35.437 00:50:27 -- common/autotest_common.sh@955 -- # kill 1695016 00:16:35.437 Received shutdown signal, test time was about 10.000000 seconds 00:16:35.437 00:16:35.437 Latency(us) 00:16:35.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.437 =================================================================================================================== 00:16:35.437 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:35.438 [2024-04-27 00:50:27.963540] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:35.438 00:50:27 -- common/autotest_common.sh@960 -- # wait 1695016 00:16:35.697 00:50:28 -- target/tls.sh@37 -- # return 1 00:16:35.697 00:50:28 -- common/autotest_common.sh@641 -- # es=1 00:16:35.697 00:50:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:35.697 00:50:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:35.697 00:50:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:35.697 00:50:28 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:35.697 00:50:28 -- common/autotest_common.sh@638 -- # local es=0 00:16:35.697 00:50:28 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:35.697 00:50:28 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:16:35.697 00:50:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:35.697 00:50:28 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:16:35.697 00:50:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:35.697 00:50:28 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:35.697 00:50:28 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:35.697 00:50:28 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:35.697 00:50:28 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:35.697 00:50:28 -- target/tls.sh@23 -- # psk= 
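The two rejected attaches above fail at PSK lookup: the errors show the identity string is built as "NVMe0R01 <hostnqn> <subnqn>", and only the pair registered via nvmf_subsystem_add_host --psk resolves to a key. The sketch below illustrates that lookup as implied by the error messages; it is not the target's actual code.

    # Sketch: PSK lookup keyed by the identity string seen in the errors above.
    declare -A registered_hosts=(
        ["NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1"]=/tmp/tmp.uk9QsBLmqK
    )

    lookup_psk() {
        local identity="NVMe0R01 $1 $2"
        if [[ -n ${registered_hosts[$identity]:-} ]]; then
            echo "${registered_hosts[$identity]}"
        else
            echo "Could not find PSK for identity: $identity" >&2
            return 1
        fi
    }

    lookup_psk nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1            # matches
    lookup_psk nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 || true    # rejected, as in tls.sh@149
    lookup_psk nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 || true    # rejected, as in tls.sh@152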
00:16:35.697 00:50:28 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:35.697 00:50:28 -- target/tls.sh@28 -- # bdevperf_pid=1695033 00:16:35.697 00:50:28 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:35.697 00:50:28 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:35.697 00:50:28 -- target/tls.sh@31 -- # waitforlisten 1695033 /var/tmp/bdevperf.sock 00:16:35.697 00:50:28 -- common/autotest_common.sh@817 -- # '[' -z 1695033 ']' 00:16:35.697 00:50:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:35.697 00:50:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:35.697 00:50:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:35.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:35.697 00:50:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:35.697 00:50:28 -- common/autotest_common.sh@10 -- # set +x 00:16:35.697 [2024-04-27 00:50:28.209730] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:35.697 [2024-04-27 00:50:28.209776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1695033 ] 00:16:35.697 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.697 [2024-04-27 00:50:28.260604] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.697 [2024-04-27 00:50:28.325789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.633 00:50:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:36.633 00:50:28 -- common/autotest_common.sh@850 -- # return 0 00:16:36.633 00:50:28 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:36.633 [2024-04-27 00:50:29.139936] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:36.633 [2024-04-27 00:50:29.141505] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1cc60 (9): Bad file descriptor 00:16:36.633 [2024-04-27 00:50:29.142504] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:36.633 [2024-04-27 00:50:29.142515] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:36.633 [2024-04-27 00:50:29.142523] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:36.633 request: 00:16:36.633 { 00:16:36.633 "name": "TLSTEST", 00:16:36.633 "trtype": "tcp", 00:16:36.633 "traddr": "10.0.0.2", 00:16:36.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:36.633 "adrfam": "ipv4", 00:16:36.633 "trsvcid": "4420", 00:16:36.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:36.633 "method": "bdev_nvme_attach_controller", 00:16:36.633 "req_id": 1 00:16:36.633 } 00:16:36.633 Got JSON-RPC error response 00:16:36.633 response: 00:16:36.633 { 00:16:36.633 "code": -32602, 00:16:36.633 "message": "Invalid parameters" 00:16:36.633 } 00:16:36.633 00:50:29 -- target/tls.sh@36 -- # killprocess 1695033 00:16:36.633 00:50:29 -- common/autotest_common.sh@936 -- # '[' -z 1695033 ']' 00:16:36.633 00:50:29 -- common/autotest_common.sh@940 -- # kill -0 1695033 00:16:36.633 00:50:29 -- common/autotest_common.sh@941 -- # uname 00:16:36.633 00:50:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:36.633 00:50:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1695033 00:16:36.633 00:50:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:36.633 00:50:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:36.633 00:50:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1695033' 00:16:36.633 killing process with pid 1695033 00:16:36.633 00:50:29 -- common/autotest_common.sh@955 -- # kill 1695033 00:16:36.633 Received shutdown signal, test time was about 10.000000 seconds 00:16:36.633 00:16:36.633 Latency(us) 00:16:36.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.633 =================================================================================================================== 00:16:36.633 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:36.633 00:50:29 -- common/autotest_common.sh@960 -- # wait 1695033 00:16:36.892 00:50:29 -- target/tls.sh@37 -- # return 1 00:16:36.892 00:50:29 -- common/autotest_common.sh@641 -- # es=1 00:16:36.892 00:50:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:36.892 00:50:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:36.892 00:50:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:36.892 00:50:29 -- target/tls.sh@158 -- # killprocess 1690135 00:16:36.892 00:50:29 -- common/autotest_common.sh@936 -- # '[' -z 1690135 ']' 00:16:36.892 00:50:29 -- common/autotest_common.sh@940 -- # kill -0 1690135 00:16:36.892 00:50:29 -- common/autotest_common.sh@941 -- # uname 00:16:36.892 00:50:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:36.892 00:50:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1690135 00:16:36.892 00:50:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:36.892 00:50:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:36.892 00:50:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1690135' 00:16:36.892 killing process with pid 1690135 00:16:36.892 00:50:29 -- common/autotest_common.sh@955 -- # kill 1690135 00:16:36.892 [2024-04-27 00:50:29.449512] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:36.892 00:50:29 -- common/autotest_common.sh@960 -- # wait 1690135 00:16:37.151 00:50:29 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:37.151 00:50:29 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:16:37.151 00:50:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:37.151 00:50:29 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:16:37.151 00:50:29 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:37.151 00:50:29 -- nvmf/common.sh@693 -- # digest=2 00:16:37.151 00:50:29 -- nvmf/common.sh@694 -- # python - 00:16:37.151 00:50:29 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:37.151 00:50:29 -- target/tls.sh@160 -- # mktemp 00:16:37.151 00:50:29 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Y5HQJPkhYG 00:16:37.151 00:50:29 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:37.151 00:50:29 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Y5HQJPkhYG 00:16:37.151 00:50:29 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:16:37.152 00:50:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:37.152 00:50:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:37.152 00:50:29 -- common/autotest_common.sh@10 -- # set +x 00:16:37.152 00:50:29 -- nvmf/common.sh@470 -- # nvmfpid=1695321 00:16:37.152 00:50:29 -- nvmf/common.sh@471 -- # waitforlisten 1695321 00:16:37.152 00:50:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:37.152 00:50:29 -- common/autotest_common.sh@817 -- # '[' -z 1695321 ']' 00:16:37.152 00:50:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.152 00:50:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:37.152 00:50:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.152 00:50:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:37.152 00:50:29 -- common/autotest_common.sh@10 -- # set +x 00:16:37.152 [2024-04-27 00:50:29.759262] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:37.152 [2024-04-27 00:50:29.759308] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.152 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.152 [2024-04-27 00:50:29.816667] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.410 [2024-04-27 00:50:29.895029] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.410 [2024-04-27 00:50:29.895063] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.410 [2024-04-27 00:50:29.895077] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.410 [2024-04-27 00:50:29.895083] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.410 [2024-04-27 00:50:29.895089] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
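After generating the longer 02-type key, the harness restarts nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waits for its RPC socket before configuring the subsystem. A rough sketch of that start-and-wait step, assuming the default /var/tmp/spdk.sock RPC path and abbreviating the binary path:

    # Sketch: start nvmf_tgt in the test namespace and poll for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Crude stand-in for waitforlisten: retry until the target answers an RPC.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
        sleep 0.1
    done
    echo "nvmf_tgt (pid $nvmfpid) is up"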
00:16:37.410 [2024-04-27 00:50:29.895109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.978 00:50:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:37.978 00:50:30 -- common/autotest_common.sh@850 -- # return 0 00:16:37.978 00:50:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:37.978 00:50:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:37.978 00:50:30 -- common/autotest_common.sh@10 -- # set +x 00:16:37.978 00:50:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.978 00:50:30 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Y5HQJPkhYG 00:16:37.978 00:50:30 -- target/tls.sh@49 -- # local key=/tmp/tmp.Y5HQJPkhYG 00:16:37.978 00:50:30 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:38.237 [2024-04-27 00:50:30.758493] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.237 00:50:30 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:38.495 00:50:30 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:38.495 [2024-04-27 00:50:31.099377] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:38.495 [2024-04-27 00:50:31.099567] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.496 00:50:31 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:38.754 malloc0 00:16:38.755 00:50:31 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:39.014 00:50:31 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y5HQJPkhYG 00:16:39.014 [2024-04-27 00:50:31.592911] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:39.014 00:50:31 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y5HQJPkhYG 00:16:39.014 00:50:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:39.014 00:50:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:39.014 00:50:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:39.014 00:50:31 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Y5HQJPkhYG' 00:16:39.014 00:50:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:39.014 00:50:31 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:39.014 00:50:31 -- target/tls.sh@28 -- # bdevperf_pid=1695753 00:16:39.014 00:50:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:39.014 00:50:31 -- target/tls.sh@31 -- # waitforlisten 1695753 /var/tmp/bdevperf.sock 00:16:39.014 00:50:31 -- common/autotest_common.sh@817 -- # '[' -z 1695753 ']' 00:16:39.014 00:50:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:39.014 00:50:31 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:16:39.014 00:50:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:39.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:39.014 00:50:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:39.014 00:50:31 -- common/autotest_common.sh@10 -- # set +x 00:16:39.014 [2024-04-27 00:50:31.634680] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:39.014 [2024-04-27 00:50:31.634725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1695753 ] 00:16:39.014 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.014 [2024-04-27 00:50:31.683597] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.273 [2024-04-27 00:50:31.753893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.273 00:50:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:39.273 00:50:31 -- common/autotest_common.sh@850 -- # return 0 00:16:39.273 00:50:31 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y5HQJPkhYG 00:16:39.533 [2024-04-27 00:50:32.002367] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:39.533 [2024-04-27 00:50:32.002434] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:39.533 TLSTESTn1 00:16:39.533 00:50:32 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:39.533 Running I/O for 10 seconds... 
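Editor's note: the target and initiator plumbing that setup_nvmf_tgt and run_bdevperf just walked through condenses to the RPC sequence below (rpc.py stands for the full scripts/rpc.py path used above; the NQNs, address, and key file are the ones from this run).

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y5HQJPkhYG
# initiator side, against the bdevperf RPC socket:
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y5HQJPkhYG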
00:16:51.743 00:16:51.743 Latency(us) 00:16:51.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.743 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:51.743 Verification LBA range: start 0x0 length 0x2000 00:16:51.743 TLSTESTn1 : 10.07 1564.19 6.11 0.00 0.00 81604.30 7151.97 131299.95 00:16:51.743 =================================================================================================================== 00:16:51.743 Total : 1564.19 6.11 0.00 0.00 81604.30 7151.97 131299.95 00:16:51.743 0 00:16:51.743 00:50:42 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:51.743 00:50:42 -- target/tls.sh@45 -- # killprocess 1695753 00:16:51.743 00:50:42 -- common/autotest_common.sh@936 -- # '[' -z 1695753 ']' 00:16:51.743 00:50:42 -- common/autotest_common.sh@940 -- # kill -0 1695753 00:16:51.743 00:50:42 -- common/autotest_common.sh@941 -- # uname 00:16:51.743 00:50:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:51.743 00:50:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1695753 00:16:51.743 00:50:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:51.743 00:50:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:51.743 00:50:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1695753' 00:16:51.743 killing process with pid 1695753 00:16:51.743 00:50:42 -- common/autotest_common.sh@955 -- # kill 1695753 00:16:51.743 Received shutdown signal, test time was about 10.000000 seconds 00:16:51.743 00:16:51.743 Latency(us) 00:16:51.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.743 =================================================================================================================== 00:16:51.743 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:51.743 [2024-04-27 00:50:42.355353] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:51.743 00:50:42 -- common/autotest_common.sh@960 -- # wait 1695753 00:16:51.743 00:50:42 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Y5HQJPkhYG 00:16:51.743 00:50:42 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y5HQJPkhYG 00:16:51.743 00:50:42 -- common/autotest_common.sh@638 -- # local es=0 00:16:51.743 00:50:42 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y5HQJPkhYG 00:16:51.743 00:50:42 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:16:51.743 00:50:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:51.743 00:50:42 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:16:51.743 00:50:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:51.744 00:50:42 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y5HQJPkhYG 00:16:51.744 00:50:42 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:51.744 00:50:42 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:51.744 00:50:42 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:51.744 00:50:42 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Y5HQJPkhYG' 00:16:51.744 00:50:42 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:51.744 00:50:42 -- target/tls.sh@28 -- # 
bdevperf_pid=1697556 00:16:51.744 00:50:42 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.744 00:50:42 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:51.744 00:50:42 -- target/tls.sh@31 -- # waitforlisten 1697556 /var/tmp/bdevperf.sock 00:16:51.744 00:50:42 -- common/autotest_common.sh@817 -- # '[' -z 1697556 ']' 00:16:51.744 00:50:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.744 00:50:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:51.744 00:50:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.744 00:50:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:51.744 00:50:42 -- common/autotest_common.sh@10 -- # set +x 00:16:51.744 [2024-04-27 00:50:42.613131] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:51.744 [2024-04-27 00:50:42.613179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697556 ] 00:16:51.744 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.744 [2024-04-27 00:50:42.662714] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.744 [2024-04-27 00:50:42.738023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.744 00:50:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:51.744 00:50:43 -- common/autotest_common.sh@850 -- # return 0 00:16:51.744 00:50:43 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y5HQJPkhYG 00:16:51.744 [2024-04-27 00:50:43.572697] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:51.744 [2024-04-27 00:50:43.572745] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:51.744 [2024-04-27 00:50:43.572752] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Y5HQJPkhYG 00:16:51.744 request: 00:16:51.744 { 00:16:51.744 "name": "TLSTEST", 00:16:51.744 "trtype": "tcp", 00:16:51.744 "traddr": "10.0.0.2", 00:16:51.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:51.744 "adrfam": "ipv4", 00:16:51.744 "trsvcid": "4420", 00:16:51.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.744 "psk": "/tmp/tmp.Y5HQJPkhYG", 00:16:51.744 "method": "bdev_nvme_attach_controller", 00:16:51.744 "req_id": 1 00:16:51.744 } 00:16:51.744 Got JSON-RPC error response 00:16:51.744 response: 00:16:51.744 { 00:16:51.744 "code": -1, 00:16:51.744 "message": "Operation not permitted" 00:16:51.744 } 00:16:51.744 00:50:43 -- target/tls.sh@36 -- # killprocess 1697556 00:16:51.744 00:50:43 -- common/autotest_common.sh@936 -- # '[' -z 1697556 ']' 00:16:51.744 00:50:43 -- common/autotest_common.sh@940 -- # kill -0 1697556 00:16:51.744 00:50:43 -- common/autotest_common.sh@941 -- # uname 00:16:51.744 00:50:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:51.744 
00:50:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1697556 00:16:51.744 00:50:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:51.744 00:50:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:51.744 00:50:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1697556' 00:16:51.744 killing process with pid 1697556 00:16:51.744 00:50:43 -- common/autotest_common.sh@955 -- # kill 1697556 00:16:51.744 Received shutdown signal, test time was about 10.000000 seconds 00:16:51.744 00:16:51.744 Latency(us) 00:16:51.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.744 =================================================================================================================== 00:16:51.744 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:51.744 00:50:43 -- common/autotest_common.sh@960 -- # wait 1697556 00:16:51.744 00:50:43 -- target/tls.sh@37 -- # return 1 00:16:51.744 00:50:43 -- common/autotest_common.sh@641 -- # es=1 00:16:51.744 00:50:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:51.744 00:50:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:51.744 00:50:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:51.744 00:50:43 -- target/tls.sh@174 -- # killprocess 1695321 00:16:51.744 00:50:43 -- common/autotest_common.sh@936 -- # '[' -z 1695321 ']' 00:16:51.744 00:50:43 -- common/autotest_common.sh@940 -- # kill -0 1695321 00:16:51.744 00:50:43 -- common/autotest_common.sh@941 -- # uname 00:16:51.744 00:50:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:51.744 00:50:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1695321 00:16:51.744 00:50:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:51.744 00:50:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:51.744 00:50:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1695321' 00:16:51.744 killing process with pid 1695321 00:16:51.744 00:50:43 -- common/autotest_common.sh@955 -- # kill 1695321 00:16:51.744 [2024-04-27 00:50:43.873438] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:51.744 00:50:43 -- common/autotest_common.sh@960 -- # wait 1695321 00:16:51.744 00:50:44 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:16:51.744 00:50:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:51.744 00:50:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:51.744 00:50:44 -- common/autotest_common.sh@10 -- # set +x 00:16:51.744 00:50:44 -- nvmf/common.sh@470 -- # nvmfpid=1697833 00:16:51.744 00:50:44 -- nvmf/common.sh@471 -- # waitforlisten 1697833 00:16:51.744 00:50:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:51.744 00:50:44 -- common/autotest_common.sh@817 -- # '[' -z 1697833 ']' 00:16:51.744 00:50:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.744 00:50:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:51.744 00:50:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
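Editor's note: both permission failures in this part of the run reduce to the same check on the PSK file mode. With the key file at 0666, the initiator-side attach above was rejected (JSON-RPC code -1, "Operation not permitted"), and the target-side nvmf_subsystem_add_host in the run that starts next is rejected the same way (code -32603, "Internal error"). Condensed, with the paths from this log:

chmod 0666 /tmp/tmp.Y5HQJPkhYG
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.Y5HQJPkhYG        # fails: Could not load PSK from /tmp/tmp.Y5HQJPkhYG
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.Y5HQJPkhYG        # fails: Could not retrieve PSK from file
chmod 0600 /tmp/tmp.Y5HQJPkhYG       # restored before the next positive run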
00:16:51.744 00:50:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:51.744 00:50:44 -- common/autotest_common.sh@10 -- # set +x 00:16:51.744 [2024-04-27 00:50:44.140988] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:51.744 [2024-04-27 00:50:44.141034] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.744 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.744 [2024-04-27 00:50:44.196805] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.744 [2024-04-27 00:50:44.273395] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.744 [2024-04-27 00:50:44.273429] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.744 [2024-04-27 00:50:44.273436] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.744 [2024-04-27 00:50:44.273443] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.744 [2024-04-27 00:50:44.273450] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.744 [2024-04-27 00:50:44.273465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.313 00:50:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:52.313 00:50:44 -- common/autotest_common.sh@850 -- # return 0 00:16:52.313 00:50:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:52.313 00:50:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:52.313 00:50:44 -- common/autotest_common.sh@10 -- # set +x 00:16:52.313 00:50:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.313 00:50:44 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Y5HQJPkhYG 00:16:52.313 00:50:44 -- common/autotest_common.sh@638 -- # local es=0 00:16:52.313 00:50:44 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Y5HQJPkhYG 00:16:52.313 00:50:44 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:16:52.313 00:50:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:52.313 00:50:44 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:16:52.313 00:50:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:52.313 00:50:44 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.Y5HQJPkhYG 00:16:52.313 00:50:44 -- target/tls.sh@49 -- # local key=/tmp/tmp.Y5HQJPkhYG 00:16:52.313 00:50:44 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:52.572 [2024-04-27 00:50:45.136583] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.572 00:50:45 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:52.855 00:50:45 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:52.855 [2024-04-27 00:50:45.481474] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:52.855 [2024-04-27 00:50:45.481661] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.855 00:50:45 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:53.114 malloc0 00:16:53.114 00:50:45 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:53.373 00:50:45 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y5HQJPkhYG 00:16:53.373 [2024-04-27 00:50:45.958861] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:16:53.373 [2024-04-27 00:50:45.958886] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:16:53.373 [2024-04-27 00:50:45.958904] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:16:53.373 request: 00:16:53.373 { 00:16:53.373 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.373 "host": "nqn.2016-06.io.spdk:host1", 00:16:53.373 "psk": "/tmp/tmp.Y5HQJPkhYG", 00:16:53.373 "method": "nvmf_subsystem_add_host", 00:16:53.373 "req_id": 1 00:16:53.373 } 00:16:53.373 Got JSON-RPC error response 00:16:53.373 response: 00:16:53.373 { 00:16:53.373 "code": -32603, 00:16:53.373 "message": "Internal error" 00:16:53.373 } 00:16:53.373 00:50:45 -- common/autotest_common.sh@641 -- # es=1 00:16:53.373 00:50:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:53.373 00:50:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:53.373 00:50:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:53.373 00:50:45 -- target/tls.sh@180 -- # killprocess 1697833 00:16:53.373 00:50:45 -- common/autotest_common.sh@936 -- # '[' -z 1697833 ']' 00:16:53.373 00:50:45 -- common/autotest_common.sh@940 -- # kill -0 1697833 00:16:53.373 00:50:45 -- common/autotest_common.sh@941 -- # uname 00:16:53.373 00:50:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.373 00:50:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1697833 00:16:53.373 00:50:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:53.374 00:50:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:53.374 00:50:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1697833' 00:16:53.374 killing process with pid 1697833 00:16:53.374 00:50:46 -- common/autotest_common.sh@955 -- # kill 1697833 00:16:53.374 00:50:46 -- common/autotest_common.sh@960 -- # wait 1697833 00:16:53.633 00:50:46 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Y5HQJPkhYG 00:16:53.633 00:50:46 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:16:53.633 00:50:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:53.633 00:50:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:53.633 00:50:46 -- common/autotest_common.sh@10 -- # set +x 00:16:53.633 00:50:46 -- nvmf/common.sh@470 -- # nvmfpid=1698103 00:16:53.633 00:50:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:53.633 00:50:46 -- nvmf/common.sh@471 -- # waitforlisten 1698103 00:16:53.633 00:50:46 -- common/autotest_common.sh@817 -- # '[' -z 1698103 ']' 00:16:53.633 00:50:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.633 00:50:46 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:16:53.633 00:50:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.633 00:50:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:53.633 00:50:46 -- common/autotest_common.sh@10 -- # set +x 00:16:53.633 [2024-04-27 00:50:46.289240] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:53.633 [2024-04-27 00:50:46.289285] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.633 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.891 [2024-04-27 00:50:46.346303] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.891 [2024-04-27 00:50:46.423597] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.891 [2024-04-27 00:50:46.423632] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.891 [2024-04-27 00:50:46.423638] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.891 [2024-04-27 00:50:46.423645] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.891 [2024-04-27 00:50:46.423650] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.891 [2024-04-27 00:50:46.423664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.459 00:50:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:54.459 00:50:47 -- common/autotest_common.sh@850 -- # return 0 00:16:54.459 00:50:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:54.459 00:50:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:54.459 00:50:47 -- common/autotest_common.sh@10 -- # set +x 00:16:54.459 00:50:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.459 00:50:47 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Y5HQJPkhYG 00:16:54.459 00:50:47 -- target/tls.sh@49 -- # local key=/tmp/tmp.Y5HQJPkhYG 00:16:54.459 00:50:47 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:54.717 [2024-04-27 00:50:47.283384] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.717 00:50:47 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:54.976 00:50:47 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:54.976 [2024-04-27 00:50:47.628271] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:54.976 [2024-04-27 00:50:47.628456] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.976 00:50:47 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:55.235 malloc0 00:16:55.235 00:50:47 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:55.494 00:50:47 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y5HQJPkhYG 00:16:55.494 [2024-04-27 00:50:48.145901] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:55.494 00:50:48 -- target/tls.sh@188 -- # bdevperf_pid=1698577 00:16:55.494 00:50:48 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:55.494 00:50:48 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:55.494 00:50:48 -- target/tls.sh@191 -- # waitforlisten 1698577 /var/tmp/bdevperf.sock 00:16:55.494 00:50:48 -- common/autotest_common.sh@817 -- # '[' -z 1698577 ']' 00:16:55.494 00:50:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.494 00:50:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:55.494 00:50:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.494 00:50:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:55.494 00:50:48 -- common/autotest_common.sh@10 -- # set +x 00:16:55.753 [2024-04-27 00:50:48.203310] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:55.753 [2024-04-27 00:50:48.203356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1698577 ] 00:16:55.753 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.753 [2024-04-27 00:50:48.252494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.753 [2024-04-27 00:50:48.325569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.321 00:50:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:56.321 00:50:49 -- common/autotest_common.sh@850 -- # return 0 00:16:56.321 00:50:49 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y5HQJPkhYG 00:16:56.579 [2024-04-27 00:50:49.164001] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:56.579 [2024-04-27 00:50:49.164077] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:56.579 TLSTESTn1 00:16:56.838 00:50:49 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:16:57.098 00:50:49 -- target/tls.sh@196 -- # tgtconf='{ 00:16:57.098 "subsystems": [ 00:16:57.098 { 00:16:57.098 "subsystem": "keyring", 00:16:57.098 "config": [] 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "subsystem": "iobuf", 00:16:57.098 "config": [ 00:16:57.098 { 00:16:57.098 "method": "iobuf_set_options", 00:16:57.098 "params": { 00:16:57.098 
"small_pool_count": 8192, 00:16:57.098 "large_pool_count": 1024, 00:16:57.098 "small_bufsize": 8192, 00:16:57.098 "large_bufsize": 135168 00:16:57.098 } 00:16:57.098 } 00:16:57.098 ] 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "subsystem": "sock", 00:16:57.098 "config": [ 00:16:57.098 { 00:16:57.098 "method": "sock_impl_set_options", 00:16:57.098 "params": { 00:16:57.098 "impl_name": "posix", 00:16:57.098 "recv_buf_size": 2097152, 00:16:57.098 "send_buf_size": 2097152, 00:16:57.098 "enable_recv_pipe": true, 00:16:57.098 "enable_quickack": false, 00:16:57.098 "enable_placement_id": 0, 00:16:57.098 "enable_zerocopy_send_server": true, 00:16:57.098 "enable_zerocopy_send_client": false, 00:16:57.098 "zerocopy_threshold": 0, 00:16:57.098 "tls_version": 0, 00:16:57.098 "enable_ktls": false 00:16:57.098 } 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "method": "sock_impl_set_options", 00:16:57.098 "params": { 00:16:57.098 "impl_name": "ssl", 00:16:57.098 "recv_buf_size": 4096, 00:16:57.098 "send_buf_size": 4096, 00:16:57.098 "enable_recv_pipe": true, 00:16:57.098 "enable_quickack": false, 00:16:57.098 "enable_placement_id": 0, 00:16:57.098 "enable_zerocopy_send_server": true, 00:16:57.098 "enable_zerocopy_send_client": false, 00:16:57.098 "zerocopy_threshold": 0, 00:16:57.098 "tls_version": 0, 00:16:57.098 "enable_ktls": false 00:16:57.098 } 00:16:57.098 } 00:16:57.098 ] 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "subsystem": "vmd", 00:16:57.098 "config": [] 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "subsystem": "accel", 00:16:57.098 "config": [ 00:16:57.098 { 00:16:57.098 "method": "accel_set_options", 00:16:57.098 "params": { 00:16:57.098 "small_cache_size": 128, 00:16:57.098 "large_cache_size": 16, 00:16:57.098 "task_count": 2048, 00:16:57.098 "sequence_count": 2048, 00:16:57.098 "buf_count": 2048 00:16:57.098 } 00:16:57.098 } 00:16:57.098 ] 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "subsystem": "bdev", 00:16:57.098 "config": [ 00:16:57.098 { 00:16:57.098 "method": "bdev_set_options", 00:16:57.098 "params": { 00:16:57.098 "bdev_io_pool_size": 65535, 00:16:57.098 "bdev_io_cache_size": 256, 00:16:57.098 "bdev_auto_examine": true, 00:16:57.098 "iobuf_small_cache_size": 128, 00:16:57.098 "iobuf_large_cache_size": 16 00:16:57.098 } 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "method": "bdev_raid_set_options", 00:16:57.098 "params": { 00:16:57.098 "process_window_size_kb": 1024 00:16:57.098 } 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "method": "bdev_iscsi_set_options", 00:16:57.098 "params": { 00:16:57.098 "timeout_sec": 30 00:16:57.098 } 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "method": "bdev_nvme_set_options", 00:16:57.098 "params": { 00:16:57.098 "action_on_timeout": "none", 00:16:57.098 "timeout_us": 0, 00:16:57.098 "timeout_admin_us": 0, 00:16:57.098 "keep_alive_timeout_ms": 10000, 00:16:57.098 "arbitration_burst": 0, 00:16:57.098 "low_priority_weight": 0, 00:16:57.098 "medium_priority_weight": 0, 00:16:57.098 "high_priority_weight": 0, 00:16:57.098 "nvme_adminq_poll_period_us": 10000, 00:16:57.098 "nvme_ioq_poll_period_us": 0, 00:16:57.098 "io_queue_requests": 0, 00:16:57.098 "delay_cmd_submit": true, 00:16:57.098 "transport_retry_count": 4, 00:16:57.098 "bdev_retry_count": 3, 00:16:57.098 "transport_ack_timeout": 0, 00:16:57.098 "ctrlr_loss_timeout_sec": 0, 00:16:57.098 "reconnect_delay_sec": 0, 00:16:57.098 "fast_io_fail_timeout_sec": 0, 00:16:57.098 "disable_auto_failback": false, 00:16:57.098 "generate_uuids": false, 00:16:57.098 "transport_tos": 0, 00:16:57.098 "nvme_error_stat": 
false, 00:16:57.098 "rdma_srq_size": 0, 00:16:57.098 "io_path_stat": false, 00:16:57.098 "allow_accel_sequence": false, 00:16:57.098 "rdma_max_cq_size": 0, 00:16:57.098 "rdma_cm_event_timeout_ms": 0, 00:16:57.098 "dhchap_digests": [ 00:16:57.098 "sha256", 00:16:57.098 "sha384", 00:16:57.098 "sha512" 00:16:57.098 ], 00:16:57.098 "dhchap_dhgroups": [ 00:16:57.098 "null", 00:16:57.098 "ffdhe2048", 00:16:57.098 "ffdhe3072", 00:16:57.098 "ffdhe4096", 00:16:57.098 "ffdhe6144", 00:16:57.098 "ffdhe8192" 00:16:57.098 ] 00:16:57.098 } 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "method": "bdev_nvme_set_hotplug", 00:16:57.098 "params": { 00:16:57.098 "period_us": 100000, 00:16:57.098 "enable": false 00:16:57.098 } 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "method": "bdev_malloc_create", 00:16:57.098 "params": { 00:16:57.098 "name": "malloc0", 00:16:57.098 "num_blocks": 8192, 00:16:57.098 "block_size": 4096, 00:16:57.098 "physical_block_size": 4096, 00:16:57.098 "uuid": "cead4b09-7f43-4860-af80-288327931a9f", 00:16:57.098 "optimal_io_boundary": 0 00:16:57.098 } 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "method": "bdev_wait_for_examine" 00:16:57.098 } 00:16:57.098 ] 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "subsystem": "nbd", 00:16:57.098 "config": [] 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "subsystem": "scheduler", 00:16:57.098 "config": [ 00:16:57.098 { 00:16:57.098 "method": "framework_set_scheduler", 00:16:57.098 "params": { 00:16:57.098 "name": "static" 00:16:57.098 } 00:16:57.098 } 00:16:57.098 ] 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "subsystem": "nvmf", 00:16:57.098 "config": [ 00:16:57.098 { 00:16:57.098 "method": "nvmf_set_config", 00:16:57.098 "params": { 00:16:57.098 "discovery_filter": "match_any", 00:16:57.098 "admin_cmd_passthru": { 00:16:57.098 "identify_ctrlr": false 00:16:57.098 } 00:16:57.098 } 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "method": "nvmf_set_max_subsystems", 00:16:57.098 "params": { 00:16:57.098 "max_subsystems": 1024 00:16:57.098 } 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "method": "nvmf_set_crdt", 00:16:57.098 "params": { 00:16:57.098 "crdt1": 0, 00:16:57.098 "crdt2": 0, 00:16:57.098 "crdt3": 0 00:16:57.098 } 00:16:57.098 }, 00:16:57.098 { 00:16:57.098 "method": "nvmf_create_transport", 00:16:57.098 "params": { 00:16:57.098 "trtype": "TCP", 00:16:57.098 "max_queue_depth": 128, 00:16:57.098 "max_io_qpairs_per_ctrlr": 127, 00:16:57.098 "in_capsule_data_size": 4096, 00:16:57.099 "max_io_size": 131072, 00:16:57.099 "io_unit_size": 131072, 00:16:57.099 "max_aq_depth": 128, 00:16:57.099 "num_shared_buffers": 511, 00:16:57.099 "buf_cache_size": 4294967295, 00:16:57.099 "dif_insert_or_strip": false, 00:16:57.099 "zcopy": false, 00:16:57.099 "c2h_success": false, 00:16:57.099 "sock_priority": 0, 00:16:57.099 "abort_timeout_sec": 1, 00:16:57.099 "ack_timeout": 0, 00:16:57.099 "data_wr_pool_size": 0 00:16:57.099 } 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "method": "nvmf_create_subsystem", 00:16:57.099 "params": { 00:16:57.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.099 "allow_any_host": false, 00:16:57.099 "serial_number": "SPDK00000000000001", 00:16:57.099 "model_number": "SPDK bdev Controller", 00:16:57.099 "max_namespaces": 10, 00:16:57.099 "min_cntlid": 1, 00:16:57.099 "max_cntlid": 65519, 00:16:57.099 "ana_reporting": false 00:16:57.099 } 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "method": "nvmf_subsystem_add_host", 00:16:57.099 "params": { 00:16:57.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.099 "host": "nqn.2016-06.io.spdk:host1", 
00:16:57.099 "psk": "/tmp/tmp.Y5HQJPkhYG" 00:16:57.099 } 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "method": "nvmf_subsystem_add_ns", 00:16:57.099 "params": { 00:16:57.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.099 "namespace": { 00:16:57.099 "nsid": 1, 00:16:57.099 "bdev_name": "malloc0", 00:16:57.099 "nguid": "CEAD4B097F434860AF80288327931A9F", 00:16:57.099 "uuid": "cead4b09-7f43-4860-af80-288327931a9f", 00:16:57.099 "no_auto_visible": false 00:16:57.099 } 00:16:57.099 } 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "method": "nvmf_subsystem_add_listener", 00:16:57.099 "params": { 00:16:57.099 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.099 "listen_address": { 00:16:57.099 "trtype": "TCP", 00:16:57.099 "adrfam": "IPv4", 00:16:57.099 "traddr": "10.0.0.2", 00:16:57.099 "trsvcid": "4420" 00:16:57.099 }, 00:16:57.099 "secure_channel": true 00:16:57.099 } 00:16:57.099 } 00:16:57.099 ] 00:16:57.099 } 00:16:57.099 ] 00:16:57.099 }' 00:16:57.099 00:50:49 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:57.099 00:50:49 -- target/tls.sh@197 -- # bdevperfconf='{ 00:16:57.099 "subsystems": [ 00:16:57.099 { 00:16:57.099 "subsystem": "keyring", 00:16:57.099 "config": [] 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "subsystem": "iobuf", 00:16:57.099 "config": [ 00:16:57.099 { 00:16:57.099 "method": "iobuf_set_options", 00:16:57.099 "params": { 00:16:57.099 "small_pool_count": 8192, 00:16:57.099 "large_pool_count": 1024, 00:16:57.099 "small_bufsize": 8192, 00:16:57.099 "large_bufsize": 135168 00:16:57.099 } 00:16:57.099 } 00:16:57.099 ] 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "subsystem": "sock", 00:16:57.099 "config": [ 00:16:57.099 { 00:16:57.099 "method": "sock_impl_set_options", 00:16:57.099 "params": { 00:16:57.099 "impl_name": "posix", 00:16:57.099 "recv_buf_size": 2097152, 00:16:57.099 "send_buf_size": 2097152, 00:16:57.099 "enable_recv_pipe": true, 00:16:57.099 "enable_quickack": false, 00:16:57.099 "enable_placement_id": 0, 00:16:57.099 "enable_zerocopy_send_server": true, 00:16:57.099 "enable_zerocopy_send_client": false, 00:16:57.099 "zerocopy_threshold": 0, 00:16:57.099 "tls_version": 0, 00:16:57.099 "enable_ktls": false 00:16:57.099 } 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "method": "sock_impl_set_options", 00:16:57.099 "params": { 00:16:57.099 "impl_name": "ssl", 00:16:57.099 "recv_buf_size": 4096, 00:16:57.099 "send_buf_size": 4096, 00:16:57.099 "enable_recv_pipe": true, 00:16:57.099 "enable_quickack": false, 00:16:57.099 "enable_placement_id": 0, 00:16:57.099 "enable_zerocopy_send_server": true, 00:16:57.099 "enable_zerocopy_send_client": false, 00:16:57.099 "zerocopy_threshold": 0, 00:16:57.099 "tls_version": 0, 00:16:57.099 "enable_ktls": false 00:16:57.099 } 00:16:57.099 } 00:16:57.099 ] 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "subsystem": "vmd", 00:16:57.099 "config": [] 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "subsystem": "accel", 00:16:57.099 "config": [ 00:16:57.099 { 00:16:57.099 "method": "accel_set_options", 00:16:57.099 "params": { 00:16:57.099 "small_cache_size": 128, 00:16:57.099 "large_cache_size": 16, 00:16:57.099 "task_count": 2048, 00:16:57.099 "sequence_count": 2048, 00:16:57.099 "buf_count": 2048 00:16:57.099 } 00:16:57.099 } 00:16:57.099 ] 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "subsystem": "bdev", 00:16:57.099 "config": [ 00:16:57.099 { 00:16:57.099 "method": "bdev_set_options", 00:16:57.099 "params": { 00:16:57.099 "bdev_io_pool_size": 65535, 
00:16:57.099 "bdev_io_cache_size": 256, 00:16:57.099 "bdev_auto_examine": true, 00:16:57.099 "iobuf_small_cache_size": 128, 00:16:57.099 "iobuf_large_cache_size": 16 00:16:57.099 } 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "method": "bdev_raid_set_options", 00:16:57.099 "params": { 00:16:57.099 "process_window_size_kb": 1024 00:16:57.099 } 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "method": "bdev_iscsi_set_options", 00:16:57.099 "params": { 00:16:57.099 "timeout_sec": 30 00:16:57.099 } 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "method": "bdev_nvme_set_options", 00:16:57.099 "params": { 00:16:57.099 "action_on_timeout": "none", 00:16:57.099 "timeout_us": 0, 00:16:57.099 "timeout_admin_us": 0, 00:16:57.099 "keep_alive_timeout_ms": 10000, 00:16:57.099 "arbitration_burst": 0, 00:16:57.099 "low_priority_weight": 0, 00:16:57.099 "medium_priority_weight": 0, 00:16:57.099 "high_priority_weight": 0, 00:16:57.099 "nvme_adminq_poll_period_us": 10000, 00:16:57.099 "nvme_ioq_poll_period_us": 0, 00:16:57.099 "io_queue_requests": 512, 00:16:57.099 "delay_cmd_submit": true, 00:16:57.099 "transport_retry_count": 4, 00:16:57.099 "bdev_retry_count": 3, 00:16:57.099 "transport_ack_timeout": 0, 00:16:57.099 "ctrlr_loss_timeout_sec": 0, 00:16:57.099 "reconnect_delay_sec": 0, 00:16:57.099 "fast_io_fail_timeout_sec": 0, 00:16:57.099 "disable_auto_failback": false, 00:16:57.099 "generate_uuids": false, 00:16:57.099 "transport_tos": 0, 00:16:57.099 "nvme_error_stat": false, 00:16:57.099 "rdma_srq_size": 0, 00:16:57.099 "io_path_stat": false, 00:16:57.099 "allow_accel_sequence": false, 00:16:57.099 "rdma_max_cq_size": 0, 00:16:57.099 "rdma_cm_event_timeout_ms": 0, 00:16:57.099 "dhchap_digests": [ 00:16:57.099 "sha256", 00:16:57.099 "sha384", 00:16:57.099 "sha512" 00:16:57.099 ], 00:16:57.099 "dhchap_dhgroups": [ 00:16:57.099 "null", 00:16:57.099 "ffdhe2048", 00:16:57.099 "ffdhe3072", 00:16:57.099 "ffdhe4096", 00:16:57.099 "ffdhe6144", 00:16:57.099 "ffdhe8192" 00:16:57.099 ] 00:16:57.099 } 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "method": "bdev_nvme_attach_controller", 00:16:57.099 "params": { 00:16:57.099 "name": "TLSTEST", 00:16:57.099 "trtype": "TCP", 00:16:57.099 "adrfam": "IPv4", 00:16:57.099 "traddr": "10.0.0.2", 00:16:57.099 "trsvcid": "4420", 00:16:57.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.099 "prchk_reftag": false, 00:16:57.099 "prchk_guard": false, 00:16:57.099 "ctrlr_loss_timeout_sec": 0, 00:16:57.099 "reconnect_delay_sec": 0, 00:16:57.099 "fast_io_fail_timeout_sec": 0, 00:16:57.099 "psk": "/tmp/tmp.Y5HQJPkhYG", 00:16:57.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:57.099 "hdgst": false, 00:16:57.099 "ddgst": false 00:16:57.099 } 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "method": "bdev_nvme_set_hotplug", 00:16:57.099 "params": { 00:16:57.099 "period_us": 100000, 00:16:57.099 "enable": false 00:16:57.099 } 00:16:57.099 }, 00:16:57.099 { 00:16:57.099 "method": "bdev_wait_for_examine" 00:16:57.099 } 00:16:57.099 ] 00:16:57.099 }, 00:16:57.100 { 00:16:57.100 "subsystem": "nbd", 00:16:57.100 "config": [] 00:16:57.100 } 00:16:57.100 ] 00:16:57.100 }' 00:16:57.100 00:50:49 -- target/tls.sh@199 -- # killprocess 1698577 00:16:57.100 00:50:49 -- common/autotest_common.sh@936 -- # '[' -z 1698577 ']' 00:16:57.100 00:50:49 -- common/autotest_common.sh@940 -- # kill -0 1698577 00:16:57.100 00:50:49 -- common/autotest_common.sh@941 -- # uname 00:16:57.100 00:50:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:57.358 00:50:49 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 1698577 00:16:57.358 00:50:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:57.358 00:50:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:57.358 00:50:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1698577' 00:16:57.358 killing process with pid 1698577 00:16:57.358 00:50:49 -- common/autotest_common.sh@955 -- # kill 1698577 00:16:57.358 Received shutdown signal, test time was about 10.000000 seconds 00:16:57.358 00:16:57.358 Latency(us) 00:16:57.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.358 =================================================================================================================== 00:16:57.358 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:57.358 [2024-04-27 00:50:49.832495] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:57.358 00:50:49 -- common/autotest_common.sh@960 -- # wait 1698577 00:16:57.358 00:50:50 -- target/tls.sh@200 -- # killprocess 1698103 00:16:57.358 00:50:50 -- common/autotest_common.sh@936 -- # '[' -z 1698103 ']' 00:16:57.358 00:50:50 -- common/autotest_common.sh@940 -- # kill -0 1698103 00:16:57.358 00:50:50 -- common/autotest_common.sh@941 -- # uname 00:16:57.358 00:50:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:57.358 00:50:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1698103 00:16:57.617 00:50:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:57.617 00:50:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:57.617 00:50:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1698103' 00:16:57.617 killing process with pid 1698103 00:16:57.617 00:50:50 -- common/autotest_common.sh@955 -- # kill 1698103 00:16:57.617 [2024-04-27 00:50:50.088494] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:57.617 00:50:50 -- common/autotest_common.sh@960 -- # wait 1698103 00:16:57.617 00:50:50 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:57.617 00:50:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:57.617 00:50:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:57.617 00:50:50 -- target/tls.sh@203 -- # echo '{ 00:16:57.617 "subsystems": [ 00:16:57.617 { 00:16:57.617 "subsystem": "keyring", 00:16:57.617 "config": [] 00:16:57.617 }, 00:16:57.617 { 00:16:57.617 "subsystem": "iobuf", 00:16:57.617 "config": [ 00:16:57.617 { 00:16:57.617 "method": "iobuf_set_options", 00:16:57.617 "params": { 00:16:57.617 "small_pool_count": 8192, 00:16:57.617 "large_pool_count": 1024, 00:16:57.617 "small_bufsize": 8192, 00:16:57.617 "large_bufsize": 135168 00:16:57.617 } 00:16:57.617 } 00:16:57.617 ] 00:16:57.617 }, 00:16:57.617 { 00:16:57.617 "subsystem": "sock", 00:16:57.617 "config": [ 00:16:57.617 { 00:16:57.617 "method": "sock_impl_set_options", 00:16:57.617 "params": { 00:16:57.617 "impl_name": "posix", 00:16:57.617 "recv_buf_size": 2097152, 00:16:57.617 "send_buf_size": 2097152, 00:16:57.617 "enable_recv_pipe": true, 00:16:57.617 "enable_quickack": false, 00:16:57.617 "enable_placement_id": 0, 00:16:57.617 "enable_zerocopy_send_server": true, 00:16:57.617 "enable_zerocopy_send_client": false, 00:16:57.617 "zerocopy_threshold": 0, 00:16:57.617 "tls_version": 0, 00:16:57.617 "enable_ktls": false 
00:16:57.617 } 00:16:57.617 }, 00:16:57.617 { 00:16:57.617 "method": "sock_impl_set_options", 00:16:57.617 "params": { 00:16:57.617 "impl_name": "ssl", 00:16:57.617 "recv_buf_size": 4096, 00:16:57.617 "send_buf_size": 4096, 00:16:57.617 "enable_recv_pipe": true, 00:16:57.617 "enable_quickack": false, 00:16:57.617 "enable_placement_id": 0, 00:16:57.617 "enable_zerocopy_send_server": true, 00:16:57.617 "enable_zerocopy_send_client": false, 00:16:57.617 "zerocopy_threshold": 0, 00:16:57.617 "tls_version": 0, 00:16:57.617 "enable_ktls": false 00:16:57.617 } 00:16:57.617 } 00:16:57.617 ] 00:16:57.617 }, 00:16:57.617 { 00:16:57.617 "subsystem": "vmd", 00:16:57.617 "config": [] 00:16:57.617 }, 00:16:57.617 { 00:16:57.617 "subsystem": "accel", 00:16:57.617 "config": [ 00:16:57.617 { 00:16:57.617 "method": "accel_set_options", 00:16:57.617 "params": { 00:16:57.617 "small_cache_size": 128, 00:16:57.617 "large_cache_size": 16, 00:16:57.617 "task_count": 2048, 00:16:57.617 "sequence_count": 2048, 00:16:57.617 "buf_count": 2048 00:16:57.617 } 00:16:57.617 } 00:16:57.617 ] 00:16:57.617 }, 00:16:57.617 { 00:16:57.617 "subsystem": "bdev", 00:16:57.617 "config": [ 00:16:57.617 { 00:16:57.617 "method": "bdev_set_options", 00:16:57.617 "params": { 00:16:57.617 "bdev_io_pool_size": 65535, 00:16:57.617 "bdev_io_cache_size": 256, 00:16:57.617 "bdev_auto_examine": true, 00:16:57.617 "iobuf_small_cache_size": 128, 00:16:57.617 "iobuf_large_cache_size": 16 00:16:57.617 } 00:16:57.617 }, 00:16:57.617 { 00:16:57.617 "method": "bdev_raid_set_options", 00:16:57.617 "params": { 00:16:57.617 "process_window_size_kb": 1024 00:16:57.617 } 00:16:57.617 }, 00:16:57.617 { 00:16:57.617 "method": "bdev_iscsi_set_options", 00:16:57.617 "params": { 00:16:57.617 "timeout_sec": 30 00:16:57.617 } 00:16:57.617 }, 00:16:57.617 { 00:16:57.617 "method": "bdev_nvme_set_options", 00:16:57.617 "params": { 00:16:57.617 "action_on_timeout": "none", 00:16:57.617 "timeout_us": 0, 00:16:57.617 "timeout_admin_us": 0, 00:16:57.617 "keep_alive_timeout_ms": 10000, 00:16:57.617 "arbitration_burst": 0, 00:16:57.617 "low_priority_weight": 0, 00:16:57.617 "medium_priority_weight": 0, 00:16:57.617 "high_priority_weight": 0, 00:16:57.617 "nvme_adminq_poll_period_us": 10000, 00:16:57.617 "nvme_ioq_poll_period_us": 0, 00:16:57.617 "io_queue_requests": 0, 00:16:57.617 "delay_cmd_submit": true, 00:16:57.617 "transport_retry_count": 4, 00:16:57.617 "bdev_retry_count": 3, 00:16:57.617 "transport_ack_timeout": 0, 00:16:57.617 "ctrlr_loss_timeout_sec": 0, 00:16:57.617 "reconnect_delay_sec": 0, 00:16:57.617 "fast_io_fail_timeout_sec": 0, 00:16:57.617 "disable_auto_failback": false, 00:16:57.617 "generate_uuids": false, 00:16:57.617 "transport_tos": 0, 00:16:57.617 "nvme_error_stat": false, 00:16:57.617 "rdma_srq_size": 0, 00:16:57.617 "io_path_stat": false, 00:16:57.617 "allow_accel_sequence": false, 00:16:57.617 "rdma_max_cq_size": 0, 00:16:57.617 "rdma_cm_event_timeout_ms": 0, 00:16:57.617 "dhchap_digests": [ 00:16:57.617 "sha256", 00:16:57.617 "sha384", 00:16:57.617 "sha512" 00:16:57.617 ], 00:16:57.617 "dhchap_dhgroups": [ 00:16:57.617 "null", 00:16:57.618 "ffdhe2048", 00:16:57.618 "ffdhe3072", 00:16:57.618 "ffdhe4096", 00:16:57.618 "ffdhe6144", 00:16:57.618 "ffdhe8192" 00:16:57.618 ] 00:16:57.618 } 00:16:57.618 }, 00:16:57.618 { 00:16:57.618 "method": "bdev_nvme_set_hotplug", 00:16:57.618 "params": { 00:16:57.618 "period_us": 100000, 00:16:57.618 "enable": false 00:16:57.618 } 00:16:57.618 }, 00:16:57.618 { 00:16:57.618 "method": "bdev_malloc_create", 
00:16:57.618 "params": { 00:16:57.618 "name": "malloc0", 00:16:57.618 "num_blocks": 8192, 00:16:57.618 "block_size": 4096, 00:16:57.618 "physical_block_size": 4096, 00:16:57.618 "uuid": "cead4b09-7f43-4860-af80-288327931a9f", 00:16:57.618 "optimal_io_boundary": 0 00:16:57.618 } 00:16:57.618 }, 00:16:57.618 { 00:16:57.618 "method": "bdev_wait_for_examine" 00:16:57.618 } 00:16:57.618 ] 00:16:57.618 }, 00:16:57.618 { 00:16:57.618 "subsystem": "nbd", 00:16:57.618 "config": [] 00:16:57.618 }, 00:16:57.618 { 00:16:57.618 "subsystem": "scheduler", 00:16:57.618 "config": [ 00:16:57.618 { 00:16:57.618 "method": "framework_set_scheduler", 00:16:57.618 "params": { 00:16:57.618 "name": "static" 00:16:57.618 } 00:16:57.618 } 00:16:57.618 ] 00:16:57.618 }, 00:16:57.618 { 00:16:57.618 "subsystem": "nvmf", 00:16:57.618 "config": [ 00:16:57.618 { 00:16:57.618 "method": "nvmf_set_config", 00:16:57.618 "params": { 00:16:57.618 "discovery_filter": "match_any", 00:16:57.618 "admin_cmd_passthru": { 00:16:57.618 "identify_ctrlr": false 00:16:57.618 } 00:16:57.618 } 00:16:57.618 }, 00:16:57.618 { 00:16:57.618 "method": "nvmf_set_max_subsystems", 00:16:57.618 "params": { 00:16:57.618 "max_subsystems": 1024 00:16:57.618 } 00:16:57.618 }, 00:16:57.618 { 00:16:57.618 "method": "nvmf_set_crdt", 00:16:57.618 "params": { 00:16:57.618 "crdt1": 0, 00:16:57.618 "crdt2": 0, 00:16:57.618 "crdt3": 0 00:16:57.618 } 00:16:57.618 }, 00:16:57.618 { 00:16:57.618 "method": "nvmf_create_transport", 00:16:57.618 "params": { 00:16:57.618 "trtype": "TCP", 00:16:57.618 "max_queue_depth": 128, 00:16:57.618 "max_io_qpairs_per_ctrlr": 127, 00:16:57.618 "in_capsule_data_size": 4096, 00:16:57.618 "max_io_size": 131072, 00:16:57.618 "io_unit_size": 131072, 00:16:57.618 "max_aq_depth": 128, 00:16:57.618 "num_shared_buffers": 511, 00:16:57.618 "buf_cache_size": 4294967295, 00:16:57.618 "dif_insert_or_strip": false, 00:16:57.618 "zcopy": false, 00:16:57.618 "c2h_success": false, 00:16:57.618 "sock_priority": 0, 00:16:57.618 "abort_timeout_sec": 1, 00:16:57.618 "ack_timeout": 0, 00:16:57.618 "data_wr_pool_size": 0 00:16:57.618 } 00:16:57.618 }, 00:16:57.618 { 00:16:57.618 "method": "nvmf_create_subsystem", 00:16:57.618 "params": { 00:16:57.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.618 "allow_any_host": false, 00:16:57.618 "serial_number": "SPDK00000000000001", 00:16:57.618 "model_number": "SPDK bdev Controller", 00:16:57.618 "max_namespaces": 10, 00:16:57.618 "min_cntlid": 1, 00:16:57.618 "max_cntlid": 65519, 00:16:57.618 "ana_reporting": false 00:16:57.618 } 00:16:57.618 }, 00:16:57.618 { 00:16:57.618 "method": "nvmf_subsystem_add_host", 00:16:57.618 "params": { 00:16:57.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.618 "host": "nqn.2016-06.io.spdk:host1", 00:16:57.618 "psk": "/tmp/tmp.Y5HQJPkhYG" 00:16:57.618 } 00:16:57.618 }, 00:16:57.618 { 00:16:57.618 "method": "nvmf_subsystem_add_ns", 00:16:57.618 "params": { 00:16:57.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.618 "namespace": { 00:16:57.618 "nsid": 1, 00:16:57.618 "bdev_name": "malloc0", 00:16:57.618 "nguid": "CEAD4B097F434860AF80288327931A9F", 00:16:57.618 "uuid": "cead4b09-7f43-4860-af80-288327931a9f", 00:16:57.618 "no_auto_visible": false 00:16:57.618 } 00:16:57.618 } 00:16:57.618 }, 00:16:57.618 { 00:16:57.618 "method": "nvmf_subsystem_add_listener", 00:16:57.618 "params": { 00:16:57.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:57.618 "listen_address": { 00:16:57.618 "trtype": "TCP", 00:16:57.618 "adrfam": "IPv4", 00:16:57.618 "traddr": "10.0.0.2", 00:16:57.618 
"trsvcid": "4420" 00:16:57.618 }, 00:16:57.618 "secure_channel": true 00:16:57.618 } 00:16:57.618 } 00:16:57.618 ] 00:16:57.618 } 00:16:57.618 ] 00:16:57.618 }' 00:16:57.618 00:50:50 -- common/autotest_common.sh@10 -- # set +x 00:16:57.877 00:50:50 -- nvmf/common.sh@470 -- # nvmfpid=1698838 00:16:57.877 00:50:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:57.877 00:50:50 -- nvmf/common.sh@471 -- # waitforlisten 1698838 00:16:57.877 00:50:50 -- common/autotest_common.sh@817 -- # '[' -z 1698838 ']' 00:16:57.877 00:50:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.877 00:50:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:57.877 00:50:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.877 00:50:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:57.877 00:50:50 -- common/autotest_common.sh@10 -- # set +x 00:16:57.877 [2024-04-27 00:50:50.359895] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:57.877 [2024-04-27 00:50:50.359940] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.877 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.877 [2024-04-27 00:50:50.415885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.877 [2024-04-27 00:50:50.491856] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.877 [2024-04-27 00:50:50.491889] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.877 [2024-04-27 00:50:50.491896] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.877 [2024-04-27 00:50:50.491902] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.877 [2024-04-27 00:50:50.491908] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:57.877 [2024-04-27 00:50:50.491962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.136 [2024-04-27 00:50:50.685848] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.137 [2024-04-27 00:50:50.701824] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:58.137 [2024-04-27 00:50:50.717865] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:58.137 [2024-04-27 00:50:50.733378] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.706 00:50:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:58.706 00:50:51 -- common/autotest_common.sh@850 -- # return 0 00:16:58.706 00:50:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:58.706 00:50:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:58.706 00:50:51 -- common/autotest_common.sh@10 -- # set +x 00:16:58.706 00:50:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.706 00:50:51 -- target/tls.sh@207 -- # bdevperf_pid=1699080 00:16:58.706 00:50:51 -- target/tls.sh@208 -- # waitforlisten 1699080 /var/tmp/bdevperf.sock 00:16:58.706 00:50:51 -- common/autotest_common.sh@817 -- # '[' -z 1699080 ']' 00:16:58.706 00:50:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.706 00:50:51 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:58.706 00:50:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:58.706 00:50:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
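Editor's note: unlike the first pass, this pass drives both ends from the JSON captured by save_config instead of issuing live RPCs. The nvmf_tgt above was started with -c /dev/fd/62 fed by the echoed tgtconf, and the bdevperf instance starting here gets bdevperfconf (including the bdev_nvme_attach_controller entry with its "psk" path) on -c /dev/fd/63, so the controller is attached during config load and the script goes straight to perform_tests. A sketch of the same flow with ordinary files (tgt.json and bdevperf.json are hypothetical names):

rpc.py save_config > tgt.json
rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf.json
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 -c tgt.json
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z \
    -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c bdevperf.json
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests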
00:16:58.706 00:50:51 -- target/tls.sh@204 -- # echo '{ 00:16:58.706 "subsystems": [ 00:16:58.706 { 00:16:58.706 "subsystem": "keyring", 00:16:58.706 "config": [] 00:16:58.706 }, 00:16:58.706 { 00:16:58.706 "subsystem": "iobuf", 00:16:58.706 "config": [ 00:16:58.706 { 00:16:58.706 "method": "iobuf_set_options", 00:16:58.706 "params": { 00:16:58.706 "small_pool_count": 8192, 00:16:58.706 "large_pool_count": 1024, 00:16:58.706 "small_bufsize": 8192, 00:16:58.706 "large_bufsize": 135168 00:16:58.706 } 00:16:58.706 } 00:16:58.706 ] 00:16:58.706 }, 00:16:58.706 { 00:16:58.706 "subsystem": "sock", 00:16:58.706 "config": [ 00:16:58.706 { 00:16:58.706 "method": "sock_impl_set_options", 00:16:58.706 "params": { 00:16:58.706 "impl_name": "posix", 00:16:58.706 "recv_buf_size": 2097152, 00:16:58.706 "send_buf_size": 2097152, 00:16:58.706 "enable_recv_pipe": true, 00:16:58.706 "enable_quickack": false, 00:16:58.706 "enable_placement_id": 0, 00:16:58.706 "enable_zerocopy_send_server": true, 00:16:58.706 "enable_zerocopy_send_client": false, 00:16:58.706 "zerocopy_threshold": 0, 00:16:58.706 "tls_version": 0, 00:16:58.706 "enable_ktls": false 00:16:58.706 } 00:16:58.706 }, 00:16:58.706 { 00:16:58.706 "method": "sock_impl_set_options", 00:16:58.706 "params": { 00:16:58.706 "impl_name": "ssl", 00:16:58.706 "recv_buf_size": 4096, 00:16:58.706 "send_buf_size": 4096, 00:16:58.706 "enable_recv_pipe": true, 00:16:58.706 "enable_quickack": false, 00:16:58.706 "enable_placement_id": 0, 00:16:58.706 "enable_zerocopy_send_server": true, 00:16:58.706 "enable_zerocopy_send_client": false, 00:16:58.706 "zerocopy_threshold": 0, 00:16:58.706 "tls_version": 0, 00:16:58.706 "enable_ktls": false 00:16:58.706 } 00:16:58.706 } 00:16:58.706 ] 00:16:58.706 }, 00:16:58.706 { 00:16:58.706 "subsystem": "vmd", 00:16:58.706 "config": [] 00:16:58.706 }, 00:16:58.706 { 00:16:58.706 "subsystem": "accel", 00:16:58.706 "config": [ 00:16:58.706 { 00:16:58.706 "method": "accel_set_options", 00:16:58.706 "params": { 00:16:58.706 "small_cache_size": 128, 00:16:58.706 "large_cache_size": 16, 00:16:58.706 "task_count": 2048, 00:16:58.706 "sequence_count": 2048, 00:16:58.706 "buf_count": 2048 00:16:58.706 } 00:16:58.706 } 00:16:58.706 ] 00:16:58.706 }, 00:16:58.706 { 00:16:58.706 "subsystem": "bdev", 00:16:58.706 "config": [ 00:16:58.706 { 00:16:58.706 "method": "bdev_set_options", 00:16:58.706 "params": { 00:16:58.706 "bdev_io_pool_size": 65535, 00:16:58.706 "bdev_io_cache_size": 256, 00:16:58.706 "bdev_auto_examine": true, 00:16:58.706 "iobuf_small_cache_size": 128, 00:16:58.706 "iobuf_large_cache_size": 16 00:16:58.706 } 00:16:58.706 }, 00:16:58.706 { 00:16:58.706 "method": "bdev_raid_set_options", 00:16:58.706 "params": { 00:16:58.706 "process_window_size_kb": 1024 00:16:58.706 } 00:16:58.706 }, 00:16:58.706 { 00:16:58.706 "method": "bdev_iscsi_set_options", 00:16:58.706 "params": { 00:16:58.706 "timeout_sec": 30 00:16:58.706 } 00:16:58.706 }, 00:16:58.706 { 00:16:58.706 "method": "bdev_nvme_set_options", 00:16:58.706 "params": { 00:16:58.706 "action_on_timeout": "none", 00:16:58.706 "timeout_us": 0, 00:16:58.706 "timeout_admin_us": 0, 00:16:58.706 "keep_alive_timeout_ms": 10000, 00:16:58.706 "arbitration_burst": 0, 00:16:58.706 "low_priority_weight": 0, 00:16:58.706 "medium_priority_weight": 0, 00:16:58.706 "high_priority_weight": 0, 00:16:58.706 "nvme_adminq_poll_period_us": 10000, 00:16:58.706 "nvme_ioq_poll_period_us": 0, 00:16:58.706 "io_queue_requests": 512, 00:16:58.706 "delay_cmd_submit": true, 00:16:58.706 "transport_retry_count": 
4, 00:16:58.706 "bdev_retry_count": 3, 00:16:58.706 "transport_ack_timeout": 0, 00:16:58.706 "ctrlr_loss_timeout_sec": 0, 00:16:58.706 "reconnect_delay_sec": 0, 00:16:58.706 "fast_io_fail_timeout_sec": 0, 00:16:58.706 "disable_auto_failback": false, 00:16:58.706 "generate_uuids": false, 00:16:58.706 "transport_tos": 0, 00:16:58.706 "nvme_error_stat": false, 00:16:58.706 "rdma_srq_size": 0, 00:16:58.706 "io_path_stat": false, 00:16:58.706 "allow_accel_sequence": false, 00:16:58.706 "rdma_max_cq_size": 0, 00:16:58.706 "rdma_cm_event_timeout_ms": 0, 00:16:58.706 "dhchap_digests": [ 00:16:58.706 "sha256", 00:16:58.706 "sha384", 00:16:58.706 "sha512" 00:16:58.706 ], 00:16:58.706 "dhchap_dhgroups": [ 00:16:58.706 "null", 00:16:58.706 "ffdhe2048", 00:16:58.706 "ffdhe3072", 00:16:58.706 "ffdhe4096", 00:16:58.706 "ffdhe6144", 00:16:58.706 "ffdhe8192" 00:16:58.706 ] 00:16:58.706 } 00:16:58.706 }, 00:16:58.706 { 00:16:58.706 "method": "bdev_nvme_attach_controller", 00:16:58.706 "params": { 00:16:58.706 "name": "TLSTEST", 00:16:58.706 "trtype": "TCP", 00:16:58.706 "adrfam": "IPv4", 00:16:58.706 "traddr": "10.0.0.2", 00:16:58.706 "trsvcid": "4420", 00:16:58.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.706 "prchk_reftag": false, 00:16:58.706 "prchk_guard": false, 00:16:58.706 "ctrlr_loss_timeout_sec": 0, 00:16:58.706 "reconnect_delay_sec": 0, 00:16:58.706 "fast_io_fail_timeout_sec": 0, 00:16:58.706 "psk": "/tmp/tmp.Y5HQJPkhYG", 00:16:58.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.706 "hdgst": false, 00:16:58.706 "ddgst": false 00:16:58.706 } 00:16:58.706 }, 00:16:58.706 { 00:16:58.706 "method": "bdev_nvme_set_hotplug", 00:16:58.706 "params": { 00:16:58.706 "period_us": 100000, 00:16:58.706 "enable": false 00:16:58.706 } 00:16:58.706 }, 00:16:58.706 { 00:16:58.706 "method": "bdev_wait_for_examine" 00:16:58.706 } 00:16:58.706 ] 00:16:58.706 }, 00:16:58.706 { 00:16:58.706 "subsystem": "nbd", 00:16:58.706 "config": [] 00:16:58.706 } 00:16:58.706 ] 00:16:58.706 }' 00:16:58.706 00:50:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:58.706 00:50:51 -- common/autotest_common.sh@10 -- # set +x 00:16:58.706 [2024-04-27 00:50:51.231908] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:58.706 [2024-04-27 00:50:51.231954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699080 ] 00:16:58.706 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.706 [2024-04-27 00:50:51.281713] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.706 [2024-04-27 00:50:51.351249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.966 [2024-04-27 00:50:51.485716] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:58.966 [2024-04-27 00:50:51.485798] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:59.533 00:50:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:59.533 00:50:52 -- common/autotest_common.sh@850 -- # return 0 00:16:59.533 00:50:52 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:59.533 Running I/O for 10 seconds... 
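The JSON blob echoed at target/tls.sh@204 above is a complete bdevperf application config for this leg of the test. Rather than attaching the TLS controller over RPC, it bakes the bdev_nvme_attach_controller call (with the file-path form "psk": "/tmp/tmp.Y5HQJPkhYG", the older interface whose deprecation warning is visible just above) into the config the application loads at startup; target/tls.sh@211 then only triggers the workload over the RPC socket. A rough sketch of the pattern outside the CI wrapper, with the launch flags inferred from the job description and the 10-second run, and the <(...) process substitution inferred from the -c /dev/fd/63 form used later at tls.sh@270 rather than shown for this particular invocation:

  # start bdevperf idle (-z) with the pre-built JSON config and an RPC socket
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 10 -c <(echo "$bdevperf_config") &
  # once it is listening, ask it to actually run the verify workload
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests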
00:17:11.738 00:17:11.738 Latency(us) 00:17:11.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.738 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:11.738 Verification LBA range: start 0x0 length 0x2000 00:17:11.738 TLSTESTn1 : 10.09 1546.24 6.04 0.00 0.00 82484.51 5185.89 128564.54 00:17:11.738 =================================================================================================================== 00:17:11.738 Total : 1546.24 6.04 0.00 0.00 82484.51 5185.89 128564.54 00:17:11.738 0 00:17:11.738 00:51:02 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:11.738 00:51:02 -- target/tls.sh@214 -- # killprocess 1699080 00:17:11.738 00:51:02 -- common/autotest_common.sh@936 -- # '[' -z 1699080 ']' 00:17:11.738 00:51:02 -- common/autotest_common.sh@940 -- # kill -0 1699080 00:17:11.738 00:51:02 -- common/autotest_common.sh@941 -- # uname 00:17:11.738 00:51:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:11.738 00:51:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1699080 00:17:11.738 00:51:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:11.738 00:51:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:11.738 00:51:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1699080' 00:17:11.738 killing process with pid 1699080 00:17:11.738 00:51:02 -- common/autotest_common.sh@955 -- # kill 1699080 00:17:11.738 Received shutdown signal, test time was about 10.000000 seconds 00:17:11.738 00:17:11.738 Latency(us) 00:17:11.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.738 =================================================================================================================== 00:17:11.738 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:11.738 [2024-04-27 00:51:02.278732] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:11.738 00:51:02 -- common/autotest_common.sh@960 -- # wait 1699080 00:17:11.738 00:51:02 -- target/tls.sh@215 -- # killprocess 1698838 00:17:11.738 00:51:02 -- common/autotest_common.sh@936 -- # '[' -z 1698838 ']' 00:17:11.738 00:51:02 -- common/autotest_common.sh@940 -- # kill -0 1698838 00:17:11.738 00:51:02 -- common/autotest_common.sh@941 -- # uname 00:17:11.738 00:51:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:11.738 00:51:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1698838 00:17:11.738 00:51:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:11.738 00:51:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:11.738 00:51:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1698838' 00:17:11.738 killing process with pid 1698838 00:17:11.738 00:51:02 -- common/autotest_common.sh@955 -- # kill 1698838 00:17:11.738 [2024-04-27 00:51:02.530195] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:11.738 00:51:02 -- common/autotest_common.sh@960 -- # wait 1698838 00:17:11.738 00:51:02 -- target/tls.sh@218 -- # nvmfappstart 00:17:11.738 00:51:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:11.738 00:51:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:11.738 00:51:02 -- common/autotest_common.sh@10 -- # set +x 00:17:11.738 00:51:02 
-- nvmf/common.sh@470 -- # nvmfpid=1701051 00:17:11.738 00:51:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:11.738 00:51:02 -- nvmf/common.sh@471 -- # waitforlisten 1701051 00:17:11.738 00:51:02 -- common/autotest_common.sh@817 -- # '[' -z 1701051 ']' 00:17:11.738 00:51:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.738 00:51:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:11.738 00:51:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.738 00:51:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:11.738 00:51:02 -- common/autotest_common.sh@10 -- # set +x 00:17:11.738 [2024-04-27 00:51:02.799323] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:17:11.738 [2024-04-27 00:51:02.799369] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.738 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.738 [2024-04-27 00:51:02.857039] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.738 [2024-04-27 00:51:02.931948] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.738 [2024-04-27 00:51:02.931986] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.738 [2024-04-27 00:51:02.931993] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.738 [2024-04-27 00:51:02.931999] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.738 [2024-04-27 00:51:02.932005] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:11.738 [2024-04-27 00:51:02.932020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.738 00:51:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:11.738 00:51:03 -- common/autotest_common.sh@850 -- # return 0 00:17:11.738 00:51:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:11.738 00:51:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:11.738 00:51:03 -- common/autotest_common.sh@10 -- # set +x 00:17:11.738 00:51:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.738 00:51:03 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Y5HQJPkhYG 00:17:11.738 00:51:03 -- target/tls.sh@49 -- # local key=/tmp/tmp.Y5HQJPkhYG 00:17:11.738 00:51:03 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:11.738 [2024-04-27 00:51:03.792078] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.738 00:51:03 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:11.738 00:51:03 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:11.738 [2024-04-27 00:51:04.140966] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:11.738 [2024-04-27 00:51:04.141143] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.738 00:51:04 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:11.738 malloc0 00:17:11.738 00:51:04 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:11.998 00:51:04 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y5HQJPkhYG 00:17:11.998 [2024-04-27 00:51:04.638429] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:11.998 00:51:04 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:11.998 00:51:04 -- target/tls.sh@222 -- # bdevperf_pid=1701341 00:17:11.998 00:51:04 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:11.998 00:51:04 -- target/tls.sh@225 -- # waitforlisten 1701341 /var/tmp/bdevperf.sock 00:17:11.998 00:51:04 -- common/autotest_common.sh@817 -- # '[' -z 1701341 ']' 00:17:11.998 00:51:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.998 00:51:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:11.998 00:51:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
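The setup_nvmf_tgt sequence traced above (target/tls.sh@51 through @58) is the complete target-side provisioning for a TLS listener with a pre-shared key. Condensed into plain rpc.py calls, with the long Jenkins paths shortened for readability and everything else exactly as the log shows:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k requests TLS on this listener ("TLS support is considered experimental" above)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # per-host PSK given as a file path; this is the form the target warns is deprecated in favor of the keyring
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y5HQJPkhYG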
00:17:11.998 00:51:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:11.998 00:51:04 -- common/autotest_common.sh@10 -- # set +x 00:17:11.998 [2024-04-27 00:51:04.684800] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:17:11.998 [2024-04-27 00:51:04.684849] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1701341 ] 00:17:12.257 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.257 [2024-04-27 00:51:04.738510] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.257 [2024-04-27 00:51:04.813263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.257 00:51:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:12.257 00:51:04 -- common/autotest_common.sh@850 -- # return 0 00:17:12.257 00:51:04 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Y5HQJPkhYG 00:17:12.516 00:51:05 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:12.775 [2024-04-27 00:51:05.223737] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:12.775 nvme0n1 00:17:12.775 00:51:05 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:12.775 Running I/O for 1 seconds... 
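On the host side (target/tls.sh@227 and @228) the test uses the keyring flow rather than the deprecated per-controller PSK path: the key file is registered once under the name key0 on the bdevperf instance, and the controller attach then references it by that name. The two calls, repeated here verbatim apart from the shortened rpc.py path:

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Y5HQJPkhYG
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1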
00:17:14.168 00:17:14.168 Latency(us) 00:17:14.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.168 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:14.168 Verification LBA range: start 0x0 length 0x2000 00:17:14.168 nvme0n1 : 1.06 1321.82 5.16 0.00 0.00 94623.69 5271.37 138594.39 00:17:14.168 =================================================================================================================== 00:17:14.168 Total : 1321.82 5.16 0.00 0.00 94623.69 5271.37 138594.39 00:17:14.168 0 00:17:14.168 00:51:06 -- target/tls.sh@234 -- # killprocess 1701341 00:17:14.168 00:51:06 -- common/autotest_common.sh@936 -- # '[' -z 1701341 ']' 00:17:14.168 00:51:06 -- common/autotest_common.sh@940 -- # kill -0 1701341 00:17:14.168 00:51:06 -- common/autotest_common.sh@941 -- # uname 00:17:14.168 00:51:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.168 00:51:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1701341 00:17:14.168 00:51:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:14.168 00:51:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:14.168 00:51:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1701341' 00:17:14.168 killing process with pid 1701341 00:17:14.168 00:51:06 -- common/autotest_common.sh@955 -- # kill 1701341 00:17:14.168 Received shutdown signal, test time was about 1.000000 seconds 00:17:14.168 00:17:14.168 Latency(us) 00:17:14.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.168 =================================================================================================================== 00:17:14.168 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:14.168 00:51:06 -- common/autotest_common.sh@960 -- # wait 1701341 00:17:14.168 00:51:06 -- target/tls.sh@235 -- # killprocess 1701051 00:17:14.168 00:51:06 -- common/autotest_common.sh@936 -- # '[' -z 1701051 ']' 00:17:14.168 00:51:06 -- common/autotest_common.sh@940 -- # kill -0 1701051 00:17:14.168 00:51:06 -- common/autotest_common.sh@941 -- # uname 00:17:14.168 00:51:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.168 00:51:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1701051 00:17:14.168 00:51:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:14.168 00:51:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:14.168 00:51:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1701051' 00:17:14.168 killing process with pid 1701051 00:17:14.168 00:51:06 -- common/autotest_common.sh@955 -- # kill 1701051 00:17:14.168 [2024-04-27 00:51:06.774087] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:14.168 00:51:06 -- common/autotest_common.sh@960 -- # wait 1701051 00:17:14.448 00:51:06 -- target/tls.sh@238 -- # nvmfappstart 00:17:14.448 00:51:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:14.448 00:51:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:14.448 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:17:14.448 00:51:06 -- nvmf/common.sh@470 -- # nvmfpid=1702163 00:17:14.448 00:51:06 -- nvmf/common.sh@471 -- # waitforlisten 1702163 00:17:14.448 00:51:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
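The killprocess/wait pattern that repeats through this teardown (autotest_common.sh@936 through @960 in the traces above) always has the same shape; the zeroed Latency table printed after killing a bdevperf instance is simply its shutdown summary from the signal handler, not a failed run. A rough reconstruction of the helper, inferred only from the xtrace lines here (the real function in SPDK's autotest_common.sh may differ in detail):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1              # @936: refuse an empty pid
      kill -0 "$pid" || return 1             # @940: only proceed if the process is still alive
      local process_name=
      if [ "$(uname)" = Linux ]; then        # @941
          process_name=$(ps --no-headers -o comm= "$pid")   # @942: e.g. reactor_1
      fi
      if [ "$process_name" != sudo ]; then   # @946: a sudo wrapper would need different handling
          echo "killing process with pid $pid"   # @954
          kill "$pid"                            # @955
          wait "$pid" || true                    # @960: reap the child so teardown stays ordered
      fi
  }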
00:17:14.448 00:51:06 -- common/autotest_common.sh@817 -- # '[' -z 1702163 ']' 00:17:14.448 00:51:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.448 00:51:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:14.448 00:51:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.448 00:51:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:14.448 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:17:14.448 [2024-04-27 00:51:07.044644] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:17:14.448 [2024-04-27 00:51:07.044691] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.448 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.448 [2024-04-27 00:51:07.099974] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.707 [2024-04-27 00:51:07.177443] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.707 [2024-04-27 00:51:07.177477] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.707 [2024-04-27 00:51:07.177484] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.707 [2024-04-27 00:51:07.177491] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.707 [2024-04-27 00:51:07.177496] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
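Because nvmf_tgt is launched with -e 0xFFFF, every tracepoint group is enabled, which is why the startup banner above advertises spdk_trace and why the cleanup at the end of this section tars up /dev/shm/nvmf_trace.0. To look at such a trace by hand (the command is the one the banner itself suggests; the spdk_trace binary path is relative to the build tree and may vary):

  # live snapshot of the shared-memory trace of app instance 0, shm name "nvmf"
  build/bin/spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory file for offline analysis, as the CI cleanup does
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0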
00:17:14.707 [2024-04-27 00:51:07.177509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.274 00:51:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:15.274 00:51:07 -- common/autotest_common.sh@850 -- # return 0 00:17:15.274 00:51:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:15.274 00:51:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:15.274 00:51:07 -- common/autotest_common.sh@10 -- # set +x 00:17:15.274 00:51:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.274 00:51:07 -- target/tls.sh@239 -- # rpc_cmd 00:17:15.274 00:51:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.274 00:51:07 -- common/autotest_common.sh@10 -- # set +x 00:17:15.274 [2024-04-27 00:51:07.895448] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.274 malloc0 00:17:15.274 [2024-04-27 00:51:07.923704] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:15.274 [2024-04-27 00:51:07.923889] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.274 00:51:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.274 00:51:07 -- target/tls.sh@252 -- # bdevperf_pid=1702417 00:17:15.274 00:51:07 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:15.274 00:51:07 -- target/tls.sh@254 -- # waitforlisten 1702417 /var/tmp/bdevperf.sock 00:17:15.274 00:51:07 -- common/autotest_common.sh@817 -- # '[' -z 1702417 ']' 00:17:15.274 00:51:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:15.274 00:51:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:15.274 00:51:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:15.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:15.274 00:51:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:15.274 00:51:07 -- common/autotest_common.sh@10 -- # set +x 00:17:15.532 [2024-04-27 00:51:07.994388] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:17:15.533 [2024-04-27 00:51:07.994429] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1702417 ] 00:17:15.533 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.533 [2024-04-27 00:51:08.048041] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.533 [2024-04-27 00:51:08.118067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.099 00:51:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:16.099 00:51:08 -- common/autotest_common.sh@850 -- # return 0 00:17:16.099 00:51:08 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Y5HQJPkhYG 00:17:16.358 00:51:08 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:16.616 [2024-04-27 00:51:09.113606] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:16.616 nvme0n1 00:17:16.616 00:51:09 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:16.616 Running I/O for 1 seconds... 00:17:17.989 00:17:17.989 Latency(us) 00:17:17.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.990 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:17.990 Verification LBA range: start 0x0 length 0x2000 00:17:17.990 nvme0n1 : 1.08 1186.12 4.63 0.00 0.00 105019.20 7180.47 147712.45 00:17:17.990 =================================================================================================================== 00:17:17.990 Total : 1186.12 4.63 0.00 0.00 105019.20 7180.47 147712.45 00:17:17.990 0 00:17:17.990 00:51:10 -- target/tls.sh@263 -- # rpc_cmd save_config 00:17:17.990 00:51:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.990 00:51:10 -- common/autotest_common.sh@10 -- # set +x 00:17:17.990 00:51:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.990 00:51:10 -- target/tls.sh@263 -- # tgtcfg='{ 00:17:17.990 "subsystems": [ 00:17:17.990 { 00:17:17.990 "subsystem": "keyring", 00:17:17.990 "config": [ 00:17:17.990 { 00:17:17.990 "method": "keyring_file_add_key", 00:17:17.990 "params": { 00:17:17.990 "name": "key0", 00:17:17.990 "path": "/tmp/tmp.Y5HQJPkhYG" 00:17:17.990 } 00:17:17.990 } 00:17:17.990 ] 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "subsystem": "iobuf", 00:17:17.990 "config": [ 00:17:17.990 { 00:17:17.990 "method": "iobuf_set_options", 00:17:17.990 "params": { 00:17:17.990 "small_pool_count": 8192, 00:17:17.990 "large_pool_count": 1024, 00:17:17.990 "small_bufsize": 8192, 00:17:17.990 "large_bufsize": 135168 00:17:17.990 } 00:17:17.990 } 00:17:17.990 ] 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "subsystem": "sock", 00:17:17.990 "config": [ 00:17:17.990 { 00:17:17.990 "method": "sock_impl_set_options", 00:17:17.990 "params": { 00:17:17.990 "impl_name": "posix", 00:17:17.990 "recv_buf_size": 2097152, 00:17:17.990 "send_buf_size": 2097152, 00:17:17.990 "enable_recv_pipe": true, 00:17:17.990 "enable_quickack": false, 00:17:17.990 "enable_placement_id": 0, 00:17:17.990 
"enable_zerocopy_send_server": true, 00:17:17.990 "enable_zerocopy_send_client": false, 00:17:17.990 "zerocopy_threshold": 0, 00:17:17.990 "tls_version": 0, 00:17:17.990 "enable_ktls": false 00:17:17.990 } 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "method": "sock_impl_set_options", 00:17:17.990 "params": { 00:17:17.990 "impl_name": "ssl", 00:17:17.990 "recv_buf_size": 4096, 00:17:17.990 "send_buf_size": 4096, 00:17:17.990 "enable_recv_pipe": true, 00:17:17.990 "enable_quickack": false, 00:17:17.990 "enable_placement_id": 0, 00:17:17.990 "enable_zerocopy_send_server": true, 00:17:17.990 "enable_zerocopy_send_client": false, 00:17:17.990 "zerocopy_threshold": 0, 00:17:17.990 "tls_version": 0, 00:17:17.990 "enable_ktls": false 00:17:17.990 } 00:17:17.990 } 00:17:17.990 ] 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "subsystem": "vmd", 00:17:17.990 "config": [] 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "subsystem": "accel", 00:17:17.990 "config": [ 00:17:17.990 { 00:17:17.990 "method": "accel_set_options", 00:17:17.990 "params": { 00:17:17.990 "small_cache_size": 128, 00:17:17.990 "large_cache_size": 16, 00:17:17.990 "task_count": 2048, 00:17:17.990 "sequence_count": 2048, 00:17:17.990 "buf_count": 2048 00:17:17.990 } 00:17:17.990 } 00:17:17.990 ] 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "subsystem": "bdev", 00:17:17.990 "config": [ 00:17:17.990 { 00:17:17.990 "method": "bdev_set_options", 00:17:17.990 "params": { 00:17:17.990 "bdev_io_pool_size": 65535, 00:17:17.990 "bdev_io_cache_size": 256, 00:17:17.990 "bdev_auto_examine": true, 00:17:17.990 "iobuf_small_cache_size": 128, 00:17:17.990 "iobuf_large_cache_size": 16 00:17:17.990 } 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "method": "bdev_raid_set_options", 00:17:17.990 "params": { 00:17:17.990 "process_window_size_kb": 1024 00:17:17.990 } 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "method": "bdev_iscsi_set_options", 00:17:17.990 "params": { 00:17:17.990 "timeout_sec": 30 00:17:17.990 } 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "method": "bdev_nvme_set_options", 00:17:17.990 "params": { 00:17:17.990 "action_on_timeout": "none", 00:17:17.990 "timeout_us": 0, 00:17:17.990 "timeout_admin_us": 0, 00:17:17.990 "keep_alive_timeout_ms": 10000, 00:17:17.990 "arbitration_burst": 0, 00:17:17.990 "low_priority_weight": 0, 00:17:17.990 "medium_priority_weight": 0, 00:17:17.990 "high_priority_weight": 0, 00:17:17.990 "nvme_adminq_poll_period_us": 10000, 00:17:17.990 "nvme_ioq_poll_period_us": 0, 00:17:17.990 "io_queue_requests": 0, 00:17:17.990 "delay_cmd_submit": true, 00:17:17.990 "transport_retry_count": 4, 00:17:17.990 "bdev_retry_count": 3, 00:17:17.990 "transport_ack_timeout": 0, 00:17:17.990 "ctrlr_loss_timeout_sec": 0, 00:17:17.990 "reconnect_delay_sec": 0, 00:17:17.990 "fast_io_fail_timeout_sec": 0, 00:17:17.990 "disable_auto_failback": false, 00:17:17.990 "generate_uuids": false, 00:17:17.990 "transport_tos": 0, 00:17:17.990 "nvme_error_stat": false, 00:17:17.990 "rdma_srq_size": 0, 00:17:17.990 "io_path_stat": false, 00:17:17.990 "allow_accel_sequence": false, 00:17:17.990 "rdma_max_cq_size": 0, 00:17:17.990 "rdma_cm_event_timeout_ms": 0, 00:17:17.990 "dhchap_digests": [ 00:17:17.990 "sha256", 00:17:17.990 "sha384", 00:17:17.990 "sha512" 00:17:17.990 ], 00:17:17.990 "dhchap_dhgroups": [ 00:17:17.990 "null", 00:17:17.990 "ffdhe2048", 00:17:17.990 "ffdhe3072", 00:17:17.990 "ffdhe4096", 00:17:17.990 "ffdhe6144", 00:17:17.990 "ffdhe8192" 00:17:17.990 ] 00:17:17.990 } 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "method": 
"bdev_nvme_set_hotplug", 00:17:17.990 "params": { 00:17:17.990 "period_us": 100000, 00:17:17.990 "enable": false 00:17:17.990 } 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "method": "bdev_malloc_create", 00:17:17.990 "params": { 00:17:17.990 "name": "malloc0", 00:17:17.990 "num_blocks": 8192, 00:17:17.990 "block_size": 4096, 00:17:17.990 "physical_block_size": 4096, 00:17:17.990 "uuid": "41091f57-3c7e-4f0f-97a4-acd044f89527", 00:17:17.990 "optimal_io_boundary": 0 00:17:17.990 } 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "method": "bdev_wait_for_examine" 00:17:17.990 } 00:17:17.990 ] 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "subsystem": "nbd", 00:17:17.990 "config": [] 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "subsystem": "scheduler", 00:17:17.990 "config": [ 00:17:17.990 { 00:17:17.990 "method": "framework_set_scheduler", 00:17:17.990 "params": { 00:17:17.990 "name": "static" 00:17:17.990 } 00:17:17.990 } 00:17:17.990 ] 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "subsystem": "nvmf", 00:17:17.990 "config": [ 00:17:17.990 { 00:17:17.990 "method": "nvmf_set_config", 00:17:17.990 "params": { 00:17:17.990 "discovery_filter": "match_any", 00:17:17.990 "admin_cmd_passthru": { 00:17:17.990 "identify_ctrlr": false 00:17:17.990 } 00:17:17.990 } 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "method": "nvmf_set_max_subsystems", 00:17:17.990 "params": { 00:17:17.990 "max_subsystems": 1024 00:17:17.990 } 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "method": "nvmf_set_crdt", 00:17:17.990 "params": { 00:17:17.990 "crdt1": 0, 00:17:17.990 "crdt2": 0, 00:17:17.990 "crdt3": 0 00:17:17.990 } 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "method": "nvmf_create_transport", 00:17:17.990 "params": { 00:17:17.990 "trtype": "TCP", 00:17:17.990 "max_queue_depth": 128, 00:17:17.990 "max_io_qpairs_per_ctrlr": 127, 00:17:17.990 "in_capsule_data_size": 4096, 00:17:17.990 "max_io_size": 131072, 00:17:17.990 "io_unit_size": 131072, 00:17:17.990 "max_aq_depth": 128, 00:17:17.990 "num_shared_buffers": 511, 00:17:17.990 "buf_cache_size": 4294967295, 00:17:17.990 "dif_insert_or_strip": false, 00:17:17.990 "zcopy": false, 00:17:17.990 "c2h_success": false, 00:17:17.990 "sock_priority": 0, 00:17:17.990 "abort_timeout_sec": 1, 00:17:17.990 "ack_timeout": 0, 00:17:17.990 "data_wr_pool_size": 0 00:17:17.990 } 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "method": "nvmf_create_subsystem", 00:17:17.990 "params": { 00:17:17.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.990 "allow_any_host": false, 00:17:17.990 "serial_number": "00000000000000000000", 00:17:17.990 "model_number": "SPDK bdev Controller", 00:17:17.990 "max_namespaces": 32, 00:17:17.990 "min_cntlid": 1, 00:17:17.990 "max_cntlid": 65519, 00:17:17.990 "ana_reporting": false 00:17:17.990 } 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "method": "nvmf_subsystem_add_host", 00:17:17.990 "params": { 00:17:17.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.990 "host": "nqn.2016-06.io.spdk:host1", 00:17:17.990 "psk": "key0" 00:17:17.990 } 00:17:17.990 }, 00:17:17.990 { 00:17:17.990 "method": "nvmf_subsystem_add_ns", 00:17:17.990 "params": { 00:17:17.990 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.990 "namespace": { 00:17:17.990 "nsid": 1, 00:17:17.990 "bdev_name": "malloc0", 00:17:17.990 "nguid": "41091F573C7E4F0F97A4ACD044F89527", 00:17:17.991 "uuid": "41091f57-3c7e-4f0f-97a4-acd044f89527", 00:17:17.991 "no_auto_visible": false 00:17:17.991 } 00:17:17.991 } 00:17:17.991 }, 00:17:17.991 { 00:17:17.991 "method": "nvmf_subsystem_add_listener", 00:17:17.991 "params": { 
00:17:17.991 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.991 "listen_address": { 00:17:17.991 "trtype": "TCP", 00:17:17.991 "adrfam": "IPv4", 00:17:17.991 "traddr": "10.0.0.2", 00:17:17.991 "trsvcid": "4420" 00:17:17.991 }, 00:17:17.991 "secure_channel": true 00:17:17.991 } 00:17:17.991 } 00:17:17.991 ] 00:17:17.991 } 00:17:17.991 ] 00:17:17.991 }' 00:17:17.991 00:51:10 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:18.250 00:51:10 -- target/tls.sh@264 -- # bperfcfg='{ 00:17:18.250 "subsystems": [ 00:17:18.250 { 00:17:18.250 "subsystem": "keyring", 00:17:18.250 "config": [ 00:17:18.250 { 00:17:18.250 "method": "keyring_file_add_key", 00:17:18.250 "params": { 00:17:18.250 "name": "key0", 00:17:18.250 "path": "/tmp/tmp.Y5HQJPkhYG" 00:17:18.250 } 00:17:18.250 } 00:17:18.250 ] 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "subsystem": "iobuf", 00:17:18.250 "config": [ 00:17:18.250 { 00:17:18.250 "method": "iobuf_set_options", 00:17:18.250 "params": { 00:17:18.250 "small_pool_count": 8192, 00:17:18.250 "large_pool_count": 1024, 00:17:18.250 "small_bufsize": 8192, 00:17:18.250 "large_bufsize": 135168 00:17:18.250 } 00:17:18.250 } 00:17:18.250 ] 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "subsystem": "sock", 00:17:18.250 "config": [ 00:17:18.250 { 00:17:18.250 "method": "sock_impl_set_options", 00:17:18.250 "params": { 00:17:18.250 "impl_name": "posix", 00:17:18.250 "recv_buf_size": 2097152, 00:17:18.250 "send_buf_size": 2097152, 00:17:18.250 "enable_recv_pipe": true, 00:17:18.250 "enable_quickack": false, 00:17:18.250 "enable_placement_id": 0, 00:17:18.250 "enable_zerocopy_send_server": true, 00:17:18.250 "enable_zerocopy_send_client": false, 00:17:18.250 "zerocopy_threshold": 0, 00:17:18.250 "tls_version": 0, 00:17:18.250 "enable_ktls": false 00:17:18.250 } 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "method": "sock_impl_set_options", 00:17:18.250 "params": { 00:17:18.250 "impl_name": "ssl", 00:17:18.250 "recv_buf_size": 4096, 00:17:18.250 "send_buf_size": 4096, 00:17:18.250 "enable_recv_pipe": true, 00:17:18.250 "enable_quickack": false, 00:17:18.250 "enable_placement_id": 0, 00:17:18.250 "enable_zerocopy_send_server": true, 00:17:18.250 "enable_zerocopy_send_client": false, 00:17:18.250 "zerocopy_threshold": 0, 00:17:18.250 "tls_version": 0, 00:17:18.250 "enable_ktls": false 00:17:18.250 } 00:17:18.250 } 00:17:18.250 ] 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "subsystem": "vmd", 00:17:18.250 "config": [] 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "subsystem": "accel", 00:17:18.250 "config": [ 00:17:18.250 { 00:17:18.250 "method": "accel_set_options", 00:17:18.250 "params": { 00:17:18.250 "small_cache_size": 128, 00:17:18.250 "large_cache_size": 16, 00:17:18.250 "task_count": 2048, 00:17:18.250 "sequence_count": 2048, 00:17:18.250 "buf_count": 2048 00:17:18.250 } 00:17:18.250 } 00:17:18.250 ] 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "subsystem": "bdev", 00:17:18.250 "config": [ 00:17:18.250 { 00:17:18.250 "method": "bdev_set_options", 00:17:18.250 "params": { 00:17:18.250 "bdev_io_pool_size": 65535, 00:17:18.250 "bdev_io_cache_size": 256, 00:17:18.250 "bdev_auto_examine": true, 00:17:18.250 "iobuf_small_cache_size": 128, 00:17:18.250 "iobuf_large_cache_size": 16 00:17:18.250 } 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "method": "bdev_raid_set_options", 00:17:18.250 "params": { 00:17:18.250 "process_window_size_kb": 1024 00:17:18.250 } 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "method": 
"bdev_iscsi_set_options", 00:17:18.250 "params": { 00:17:18.250 "timeout_sec": 30 00:17:18.250 } 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "method": "bdev_nvme_set_options", 00:17:18.250 "params": { 00:17:18.250 "action_on_timeout": "none", 00:17:18.250 "timeout_us": 0, 00:17:18.250 "timeout_admin_us": 0, 00:17:18.250 "keep_alive_timeout_ms": 10000, 00:17:18.250 "arbitration_burst": 0, 00:17:18.250 "low_priority_weight": 0, 00:17:18.250 "medium_priority_weight": 0, 00:17:18.250 "high_priority_weight": 0, 00:17:18.250 "nvme_adminq_poll_period_us": 10000, 00:17:18.250 "nvme_ioq_poll_period_us": 0, 00:17:18.250 "io_queue_requests": 512, 00:17:18.250 "delay_cmd_submit": true, 00:17:18.250 "transport_retry_count": 4, 00:17:18.250 "bdev_retry_count": 3, 00:17:18.250 "transport_ack_timeout": 0, 00:17:18.250 "ctrlr_loss_timeout_sec": 0, 00:17:18.250 "reconnect_delay_sec": 0, 00:17:18.250 "fast_io_fail_timeout_sec": 0, 00:17:18.250 "disable_auto_failback": false, 00:17:18.250 "generate_uuids": false, 00:17:18.250 "transport_tos": 0, 00:17:18.250 "nvme_error_stat": false, 00:17:18.250 "rdma_srq_size": 0, 00:17:18.250 "io_path_stat": false, 00:17:18.250 "allow_accel_sequence": false, 00:17:18.250 "rdma_max_cq_size": 0, 00:17:18.250 "rdma_cm_event_timeout_ms": 0, 00:17:18.250 "dhchap_digests": [ 00:17:18.250 "sha256", 00:17:18.250 "sha384", 00:17:18.250 "sha512" 00:17:18.250 ], 00:17:18.250 "dhchap_dhgroups": [ 00:17:18.250 "null", 00:17:18.250 "ffdhe2048", 00:17:18.250 "ffdhe3072", 00:17:18.250 "ffdhe4096", 00:17:18.250 "ffdhe6144", 00:17:18.250 "ffdhe8192" 00:17:18.250 ] 00:17:18.250 } 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "method": "bdev_nvme_attach_controller", 00:17:18.250 "params": { 00:17:18.250 "name": "nvme0", 00:17:18.250 "trtype": "TCP", 00:17:18.250 "adrfam": "IPv4", 00:17:18.250 "traddr": "10.0.0.2", 00:17:18.250 "trsvcid": "4420", 00:17:18.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.250 "prchk_reftag": false, 00:17:18.250 "prchk_guard": false, 00:17:18.250 "ctrlr_loss_timeout_sec": 0, 00:17:18.250 "reconnect_delay_sec": 0, 00:17:18.250 "fast_io_fail_timeout_sec": 0, 00:17:18.250 "psk": "key0", 00:17:18.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:18.250 "hdgst": false, 00:17:18.250 "ddgst": false 00:17:18.250 } 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "method": "bdev_nvme_set_hotplug", 00:17:18.250 "params": { 00:17:18.250 "period_us": 100000, 00:17:18.250 "enable": false 00:17:18.250 } 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "method": "bdev_enable_histogram", 00:17:18.250 "params": { 00:17:18.250 "name": "nvme0n1", 00:17:18.250 "enable": true 00:17:18.250 } 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "method": "bdev_wait_for_examine" 00:17:18.250 } 00:17:18.250 ] 00:17:18.250 }, 00:17:18.250 { 00:17:18.250 "subsystem": "nbd", 00:17:18.250 "config": [] 00:17:18.250 } 00:17:18.250 ] 00:17:18.250 }' 00:17:18.250 00:51:10 -- target/tls.sh@266 -- # killprocess 1702417 00:17:18.250 00:51:10 -- common/autotest_common.sh@936 -- # '[' -z 1702417 ']' 00:17:18.250 00:51:10 -- common/autotest_common.sh@940 -- # kill -0 1702417 00:17:18.250 00:51:10 -- common/autotest_common.sh@941 -- # uname 00:17:18.250 00:51:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.250 00:51:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1702417 00:17:18.250 00:51:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:18.250 00:51:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:18.250 00:51:10 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1702417' 00:17:18.250 killing process with pid 1702417 00:17:18.250 00:51:10 -- common/autotest_common.sh@955 -- # kill 1702417 00:17:18.250 Received shutdown signal, test time was about 1.000000 seconds 00:17:18.250 00:17:18.250 Latency(us) 00:17:18.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.250 =================================================================================================================== 00:17:18.250 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:18.250 00:51:10 -- common/autotest_common.sh@960 -- # wait 1702417 00:17:18.510 00:51:11 -- target/tls.sh@267 -- # killprocess 1702163 00:17:18.510 00:51:11 -- common/autotest_common.sh@936 -- # '[' -z 1702163 ']' 00:17:18.510 00:51:11 -- common/autotest_common.sh@940 -- # kill -0 1702163 00:17:18.510 00:51:11 -- common/autotest_common.sh@941 -- # uname 00:17:18.510 00:51:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:18.510 00:51:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1702163 00:17:18.510 00:51:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:18.510 00:51:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:18.510 00:51:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1702163' 00:17:18.510 killing process with pid 1702163 00:17:18.510 00:51:11 -- common/autotest_common.sh@955 -- # kill 1702163 00:17:18.510 00:51:11 -- common/autotest_common.sh@960 -- # wait 1702163 00:17:18.770 00:51:11 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:17:18.770 00:51:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:18.770 00:51:11 -- target/tls.sh@269 -- # echo '{ 00:17:18.770 "subsystems": [ 00:17:18.770 { 00:17:18.770 "subsystem": "keyring", 00:17:18.770 "config": [ 00:17:18.770 { 00:17:18.770 "method": "keyring_file_add_key", 00:17:18.770 "params": { 00:17:18.770 "name": "key0", 00:17:18.770 "path": "/tmp/tmp.Y5HQJPkhYG" 00:17:18.770 } 00:17:18.770 } 00:17:18.770 ] 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "subsystem": "iobuf", 00:17:18.770 "config": [ 00:17:18.770 { 00:17:18.770 "method": "iobuf_set_options", 00:17:18.770 "params": { 00:17:18.770 "small_pool_count": 8192, 00:17:18.770 "large_pool_count": 1024, 00:17:18.770 "small_bufsize": 8192, 00:17:18.770 "large_bufsize": 135168 00:17:18.770 } 00:17:18.770 } 00:17:18.770 ] 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "subsystem": "sock", 00:17:18.770 "config": [ 00:17:18.770 { 00:17:18.770 "method": "sock_impl_set_options", 00:17:18.770 "params": { 00:17:18.770 "impl_name": "posix", 00:17:18.770 "recv_buf_size": 2097152, 00:17:18.770 "send_buf_size": 2097152, 00:17:18.770 "enable_recv_pipe": true, 00:17:18.770 "enable_quickack": false, 00:17:18.770 "enable_placement_id": 0, 00:17:18.770 "enable_zerocopy_send_server": true, 00:17:18.770 "enable_zerocopy_send_client": false, 00:17:18.770 "zerocopy_threshold": 0, 00:17:18.770 "tls_version": 0, 00:17:18.770 "enable_ktls": false 00:17:18.770 } 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "method": "sock_impl_set_options", 00:17:18.770 "params": { 00:17:18.770 "impl_name": "ssl", 00:17:18.770 "recv_buf_size": 4096, 00:17:18.770 "send_buf_size": 4096, 00:17:18.770 "enable_recv_pipe": true, 00:17:18.770 "enable_quickack": false, 00:17:18.770 "enable_placement_id": 0, 00:17:18.770 "enable_zerocopy_send_server": true, 00:17:18.770 "enable_zerocopy_send_client": false, 00:17:18.770 "zerocopy_threshold": 0, 
00:17:18.770 "tls_version": 0, 00:17:18.770 "enable_ktls": false 00:17:18.770 } 00:17:18.770 } 00:17:18.770 ] 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "subsystem": "vmd", 00:17:18.770 "config": [] 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "subsystem": "accel", 00:17:18.770 "config": [ 00:17:18.770 { 00:17:18.770 "method": "accel_set_options", 00:17:18.770 "params": { 00:17:18.770 "small_cache_size": 128, 00:17:18.770 "large_cache_size": 16, 00:17:18.770 "task_count": 2048, 00:17:18.770 "sequence_count": 2048, 00:17:18.770 "buf_count": 2048 00:17:18.770 } 00:17:18.770 } 00:17:18.770 ] 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "subsystem": "bdev", 00:17:18.770 "config": [ 00:17:18.770 { 00:17:18.770 "method": "bdev_set_options", 00:17:18.770 "params": { 00:17:18.770 "bdev_io_pool_size": 65535, 00:17:18.770 "bdev_io_cache_size": 256, 00:17:18.770 "bdev_auto_examine": true, 00:17:18.770 "iobuf_small_cache_size": 128, 00:17:18.770 "iobuf_large_cache_size": 16 00:17:18.770 } 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "method": "bdev_raid_set_options", 00:17:18.770 "params": { 00:17:18.770 "process_window_size_kb": 1024 00:17:18.770 } 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "method": "bdev_iscsi_set_options", 00:17:18.770 "params": { 00:17:18.770 "timeout_sec": 30 00:17:18.770 } 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "method": "bdev_nvme_set_options", 00:17:18.770 "params": { 00:17:18.770 "action_on_timeout": "none", 00:17:18.770 "timeout_us": 0, 00:17:18.770 "timeout_admin_us": 0, 00:17:18.770 "keep_alive_timeout_ms": 10000, 00:17:18.770 "arbitration_burst": 0, 00:17:18.770 "low_priority_weight": 0, 00:17:18.770 "medium_priority_weight": 0, 00:17:18.770 "high_priority_weight": 0, 00:17:18.770 "nvme_adminq_poll_period_us": 10000, 00:17:18.770 "nvme_ioq_poll_period_us": 0, 00:17:18.770 "io_queue_requests": 0, 00:17:18.770 "delay_cmd_submit": true, 00:17:18.770 "transport_retry_count": 4, 00:17:18.770 "bdev_retry_count": 3, 00:17:18.770 "transport_ack_timeout": 0, 00:17:18.770 "ctrlr_loss_timeout_sec": 0, 00:17:18.770 "reconnect_delay_sec": 0, 00:17:18.770 "fast_io_fail_timeout_sec": 0, 00:17:18.770 "disable_auto_failback": false, 00:17:18.770 "generate_uuids": false, 00:17:18.770 "transport_tos": 0, 00:17:18.770 "nvme_error_stat": false, 00:17:18.770 "rdma_srq_size": 0, 00:17:18.770 "io_path_stat": false, 00:17:18.770 "allow_accel_sequence": false, 00:17:18.770 "rdma_max_cq_size": 0, 00:17:18.770 "rdma_cm_event_timeout_ms": 0, 00:17:18.770 "dhchap_digests": [ 00:17:18.770 "sha256", 00:17:18.770 "sha384", 00:17:18.770 "sha512" 00:17:18.770 ], 00:17:18.770 "dhchap_dhgroups": [ 00:17:18.770 "null", 00:17:18.770 "ffdhe2048", 00:17:18.770 "ffdhe3072", 00:17:18.770 "ffdhe4096", 00:17:18.770 "ffdhe6144", 00:17:18.770 "ffdhe8192" 00:17:18.770 ] 00:17:18.770 } 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "method": "bdev_nvme_set_hotplug", 00:17:18.770 "params": { 00:17:18.770 "period_us": 100000, 00:17:18.770 "enable": false 00:17:18.770 } 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "method": "bdev_malloc_create", 00:17:18.770 "params": { 00:17:18.770 "name": "malloc0", 00:17:18.770 "num_blocks": 8192, 00:17:18.770 "block_size": 4096, 00:17:18.770 "physical_block_size": 4096, 00:17:18.770 "uuid": "41091f57-3c7e-4f0f-97a4-acd044f89527", 00:17:18.770 "optimal_io_boundary": 0 00:17:18.770 } 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "method": "bdev_wait_for_examine" 00:17:18.770 } 00:17:18.770 ] 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "subsystem": "nbd", 00:17:18.770 "config": [] 
00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "subsystem": "scheduler", 00:17:18.770 "config": [ 00:17:18.770 { 00:17:18.770 "method": "framework_set_scheduler", 00:17:18.770 "params": { 00:17:18.770 "name": "static" 00:17:18.770 } 00:17:18.770 } 00:17:18.770 ] 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "subsystem": "nvmf", 00:17:18.770 "config": [ 00:17:18.770 { 00:17:18.770 "method": "nvmf_set_config", 00:17:18.770 "params": { 00:17:18.770 "discovery_filter": "match_any", 00:17:18.770 "admin_cmd_passthru": { 00:17:18.770 "identify_ctrlr": false 00:17:18.770 } 00:17:18.770 } 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "method": "nvmf_set_max_subsystems", 00:17:18.770 "params": { 00:17:18.770 "max_subsystems": 1024 00:17:18.770 } 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "method": "nvmf_set_crdt", 00:17:18.770 "params": { 00:17:18.770 "crdt1": 0, 00:17:18.770 "crdt2": 0, 00:17:18.770 "crdt3": 0 00:17:18.770 } 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "method": "nvmf_create_transport", 00:17:18.770 "params": { 00:17:18.770 "trtype": "TCP", 00:17:18.770 "max_queue_depth": 128, 00:17:18.770 "max_io_qpairs_per_ctrlr": 127, 00:17:18.770 "in_capsule_data_size": 4096, 00:17:18.770 "max_io_size": 131072, 00:17:18.770 "io_unit_size": 131072, 00:17:18.770 "max_aq_depth": 128, 00:17:18.770 "num_shared_buffers": 511, 00:17:18.770 "buf_cache_size": 4294967295, 00:17:18.770 "dif_insert_or_strip": false, 00:17:18.770 "zcopy": false, 00:17:18.770 "c2h_success": false, 00:17:18.770 "sock_priority": 0, 00:17:18.770 "abort_timeout_sec": 1, 00:17:18.770 "ack_timeout": 0, 00:17:18.770 "data_wr_pool_size": 0 00:17:18.770 } 00:17:18.770 }, 00:17:18.770 { 00:17:18.770 "method": "nvmf_create_subsystem", 00:17:18.770 "params": { 00:17:18.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.771 "allow_any_host": false, 00:17:18.771 "serial_number": "00000000000000000000", 00:17:18.771 "model_number": "SPDK bdev 00:51:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:18.771 Controller", 00:17:18.771 "max_namespaces": 32, 00:17:18.771 "min_cntlid": 1, 00:17:18.771 "max_cntlid": 65519, 00:17:18.771 "ana_reporting": false 00:17:18.771 } 00:17:18.771 }, 00:17:18.771 { 00:17:18.771 "method": "nvmf_subsystem_add_host", 00:17:18.771 "params": { 00:17:18.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.771 "host": "nqn.2016-06.io.spdk:host1", 00:17:18.771 "psk": "key0" 00:17:18.771 } 00:17:18.771 }, 00:17:18.771 { 00:17:18.771 "method": "nvmf_subsystem_add_ns", 00:17:18.771 "params": { 00:17:18.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.771 "namespace": { 00:17:18.771 "nsid": 1, 00:17:18.771 "bdev_name": "malloc0", 00:17:18.771 "nguid": "41091F573C7E4F0F97A4ACD044F89527", 00:17:18.771 "uuid": "41091f57-3c7e-4f0f-97a4-acd044f89527", 00:17:18.771 "no_auto_visible": false 00:17:18.771 } 00:17:18.771 } 00:17:18.771 }, 00:17:18.771 { 00:17:18.771 "method": "nvmf_subsystem_add_listener", 00:17:18.771 "params": { 00:17:18.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.771 "listen_address": { 00:17:18.771 "trtype": "TCP", 00:17:18.771 "adrfam": "IPv4", 00:17:18.771 "traddr": "10.0.0.2", 00:17:18.771 "trsvcid": "4420" 00:17:18.771 }, 00:17:18.771 "secure_channel": true 00:17:18.771 } 00:17:18.771 } 00:17:18.771 ] 00:17:18.771 } 00:17:18.771 ] 00:17:18.771 }' 00:17:18.771 00:51:11 -- common/autotest_common.sh@10 -- # set +x 00:17:18.771 00:51:11 -- nvmf/common.sh@470 -- # nvmfpid=1702901 00:17:18.771 00:51:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:18.771 00:51:11 -- nvmf/common.sh@471 -- # waitforlisten 1702901 00:17:18.771 00:51:11 -- common/autotest_common.sh@817 -- # '[' -z 1702901 ']' 00:17:18.771 00:51:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.771 00:51:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:18.771 00:51:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.771 00:51:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:18.771 00:51:11 -- common/autotest_common.sh@10 -- # set +x 00:17:18.771 [2024-04-27 00:51:11.324831] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:17:18.771 [2024-04-27 00:51:11.324877] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.771 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.771 [2024-04-27 00:51:11.380676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.771 [2024-04-27 00:51:11.445896] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.771 [2024-04-27 00:51:11.445936] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.771 [2024-04-27 00:51:11.445943] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.771 [2024-04-27 00:51:11.445949] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.771 [2024-04-27 00:51:11.445958] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:18.771 [2024-04-27 00:51:11.446016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.030 [2024-04-27 00:51:11.649474] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.030 [2024-04-27 00:51:11.681505] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:19.030 [2024-04-27 00:51:11.692368] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.599 00:51:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:19.599 00:51:12 -- common/autotest_common.sh@850 -- # return 0 00:17:19.599 00:51:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:19.599 00:51:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:19.599 00:51:12 -- common/autotest_common.sh@10 -- # set +x 00:17:19.599 00:51:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.599 00:51:12 -- target/tls.sh@272 -- # bdevperf_pid=1703147 00:17:19.599 00:51:12 -- target/tls.sh@273 -- # waitforlisten 1703147 /var/tmp/bdevperf.sock 00:17:19.599 00:51:12 -- common/autotest_common.sh@817 -- # '[' -z 1703147 ']' 00:17:19.599 00:51:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.599 00:51:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:19.599 00:51:12 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:19.599 00:51:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
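This final leg replays the configurations captured with save_config above instead of re-issuing the RPCs one by one: the target config (tgtcfg, captured with rpc_cmd save_config at tls.sh@263) is fed to nvmf_tgt as -c /dev/fd/62, and the bdevperf config (bperfcfg, saved over /var/tmp/bdevperf.sock at tls.sh@264) goes to bdevperf as -c /dev/fd/63. Those /dev/fd paths are ordinary process substitution; a minimal sketch of the same flow, assuming a stock SPDK checkout rather than the Jenkins workspace paths:

  tgtcfg=$(scripts/rpc.py save_config)                               # tls.sh@263 equivalent
  bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)   # tls.sh@264
  # later: boot both applications straight from the captured JSON
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &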
00:17:19.599 00:51:12 -- target/tls.sh@270 -- # echo '{ 00:17:19.599 "subsystems": [ 00:17:19.599 { 00:17:19.599 "subsystem": "keyring", 00:17:19.599 "config": [ 00:17:19.599 { 00:17:19.599 "method": "keyring_file_add_key", 00:17:19.599 "params": { 00:17:19.599 "name": "key0", 00:17:19.599 "path": "/tmp/tmp.Y5HQJPkhYG" 00:17:19.599 } 00:17:19.599 } 00:17:19.599 ] 00:17:19.599 }, 00:17:19.599 { 00:17:19.599 "subsystem": "iobuf", 00:17:19.599 "config": [ 00:17:19.599 { 00:17:19.599 "method": "iobuf_set_options", 00:17:19.599 "params": { 00:17:19.599 "small_pool_count": 8192, 00:17:19.599 "large_pool_count": 1024, 00:17:19.599 "small_bufsize": 8192, 00:17:19.599 "large_bufsize": 135168 00:17:19.599 } 00:17:19.599 } 00:17:19.599 ] 00:17:19.599 }, 00:17:19.599 { 00:17:19.599 "subsystem": "sock", 00:17:19.599 "config": [ 00:17:19.599 { 00:17:19.599 "method": "sock_impl_set_options", 00:17:19.599 "params": { 00:17:19.599 "impl_name": "posix", 00:17:19.599 "recv_buf_size": 2097152, 00:17:19.599 "send_buf_size": 2097152, 00:17:19.599 "enable_recv_pipe": true, 00:17:19.599 "enable_quickack": false, 00:17:19.599 "enable_placement_id": 0, 00:17:19.599 "enable_zerocopy_send_server": true, 00:17:19.599 "enable_zerocopy_send_client": false, 00:17:19.599 "zerocopy_threshold": 0, 00:17:19.599 "tls_version": 0, 00:17:19.599 "enable_ktls": false 00:17:19.600 } 00:17:19.600 }, 00:17:19.600 { 00:17:19.600 "method": "sock_impl_set_options", 00:17:19.600 "params": { 00:17:19.600 "impl_name": "ssl", 00:17:19.600 "recv_buf_size": 4096, 00:17:19.600 "send_buf_size": 4096, 00:17:19.600 "enable_recv_pipe": true, 00:17:19.600 "enable_quickack": false, 00:17:19.600 "enable_placement_id": 0, 00:17:19.600 "enable_zerocopy_send_server": true, 00:17:19.600 "enable_zerocopy_send_client": false, 00:17:19.600 "zerocopy_threshold": 0, 00:17:19.600 "tls_version": 0, 00:17:19.600 "enable_ktls": false 00:17:19.600 } 00:17:19.600 } 00:17:19.600 ] 00:17:19.600 }, 00:17:19.600 { 00:17:19.600 "subsystem": "vmd", 00:17:19.600 "config": [] 00:17:19.600 }, 00:17:19.600 { 00:17:19.600 "subsystem": "accel", 00:17:19.600 "config": [ 00:17:19.600 { 00:17:19.600 "method": "accel_set_options", 00:17:19.600 "params": { 00:17:19.600 "small_cache_size": 128, 00:17:19.600 "large_cache_size": 16, 00:17:19.600 "task_count": 2048, 00:17:19.600 "sequence_count": 2048, 00:17:19.600 "buf_count": 2048 00:17:19.600 } 00:17:19.600 } 00:17:19.600 ] 00:17:19.600 }, 00:17:19.600 { 00:17:19.600 "subsystem": "bdev", 00:17:19.600 "config": [ 00:17:19.600 { 00:17:19.600 "method": "bdev_set_options", 00:17:19.600 "params": { 00:17:19.600 "bdev_io_pool_size": 65535, 00:17:19.600 "bdev_io_cache_size": 256, 00:17:19.600 "bdev_auto_examine": true, 00:17:19.600 "iobuf_small_cache_size": 128, 00:17:19.600 "iobuf_large_cache_size": 16 00:17:19.600 } 00:17:19.600 }, 00:17:19.600 { 00:17:19.600 "method": "bdev_raid_set_options", 00:17:19.600 "params": { 00:17:19.600 "process_window_size_kb": 1024 00:17:19.600 } 00:17:19.600 }, 00:17:19.600 { 00:17:19.600 "method": "bdev_iscsi_set_options", 00:17:19.600 "params": { 00:17:19.600 "timeout_sec": 30 00:17:19.600 } 00:17:19.600 }, 00:17:19.600 { 00:17:19.600 "method": "bdev_nvme_set_options", 00:17:19.600 "params": { 00:17:19.600 "action_on_timeout": "none", 00:17:19.600 "timeout_us": 0, 00:17:19.600 "timeout_admin_us": 0, 00:17:19.600 "keep_alive_timeout_ms": 10000, 00:17:19.600 "arbitration_burst": 0, 00:17:19.600 "low_priority_weight": 0, 00:17:19.600 "medium_priority_weight": 0, 00:17:19.600 "high_priority_weight": 0, 
00:17:19.600 "nvme_adminq_poll_period_us": 10000, 00:17:19.600 "nvme_ioq_poll_period_us": 0, 00:17:19.600 "io_queue_requests": 512, 00:17:19.600 "delay_cmd_submit": true, 00:17:19.600 "transport_retry_count": 4, 00:17:19.600 "bdev_retry_count": 3, 00:17:19.600 "transport_ack_timeout": 0, 00:17:19.600 "ctrlr_loss_timeout_sec": 0, 00:17:19.600 "reconnect_delay_sec": 0, 00:17:19.600 "fast_io_fail_timeout_sec": 0, 00:17:19.600 "disable_auto_failback": false, 00:17:19.600 "generate_uuids": false, 00:17:19.600 "transport_tos": 0, 00:17:19.600 "nvme_error_stat": false, 00:17:19.600 "rdma_srq_size": 0, 00:17:19.600 "io_path_stat": false, 00:17:19.600 "allow_accel_sequence": false, 00:17:19.600 "rdma_max_cq_size": 0, 00:17:19.600 "rdma_cm_event_timeout_ms": 0, 00:17:19.600 "dhchap_digests": [ 00:17:19.600 "sha256", 00:17:19.600 "sha384", 00:17:19.600 "sha512" 00:17:19.600 ], 00:17:19.600 "dhchap_dhgroups": [ 00:17:19.600 "null", 00:17:19.600 "ffdhe2048", 00:17:19.600 "ffdhe3072", 00:17:19.600 "ffdhe4096", 00:17:19.600 "ffdhe6144", 00:17:19.600 "ffdhe8192" 00:17:19.600 ] 00:17:19.600 } 00:17:19.600 }, 00:17:19.600 { 00:17:19.600 "method": "bdev_nvme_attach_controller", 00:17:19.600 "params": { 00:17:19.600 "name": "nvme0", 00:17:19.600 "trtype": "TCP", 00:17:19.600 "adrfam": "IPv4", 00:17:19.600 "traddr": "10.0.0.2", 00:17:19.600 "trsvcid": "4420", 00:17:19.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.600 "prchk_reftag": false, 00:17:19.600 "prchk_guard": false, 00:17:19.600 "ctrlr_loss_timeout_sec": 0, 00:17:19.600 "reconnect_delay_sec": 0, 00:17:19.600 "fast_io_fail_timeout_sec": 0, 00:17:19.600 "psk": "key0", 00:17:19.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.600 "hdgst": false, 00:17:19.600 "ddgst": false 00:17:19.600 } 00:17:19.600 }, 00:17:19.600 { 00:17:19.600 "method": "bdev_nvme_set_hotplug", 00:17:19.600 "params": { 00:17:19.600 "period_us": 100000, 00:17:19.600 "enable": false 00:17:19.600 } 00:17:19.600 }, 00:17:19.600 { 00:17:19.600 "method": "bdev_enable_histogram", 00:17:19.600 "params": { 00:17:19.600 "name": "nvme0n1", 00:17:19.600 "enable": true 00:17:19.600 } 00:17:19.600 }, 00:17:19.600 { 00:17:19.600 "method": "bdev_wait_for_examine" 00:17:19.600 } 00:17:19.600 ] 00:17:19.600 }, 00:17:19.600 { 00:17:19.600 "subsystem": "nbd", 00:17:19.600 "config": [] 00:17:19.600 } 00:17:19.600 ] 00:17:19.600 }' 00:17:19.600 00:51:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:19.600 00:51:12 -- common/autotest_common.sh@10 -- # set +x 00:17:19.600 [2024-04-27 00:51:12.192381] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
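The large echo above is the JSON configuration handed to bdevperf through process substitution (-c /dev/fd/63): it registers the TLS PSK file as keyring entry "key0" and attaches an NVMe/TCP controller that references it via "psk": "key0". A cut-down sketch of the same setup, keeping only the keyring and bdev subsystems and using placeholder paths (/tmp/psk.txt, /tmp/bdevperf.json); the real config additionally tunes the sock, iobuf and accel subsystems:

    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "keyring",
          "config": [ { "method": "keyring_file_add_key",
                        "params": { "name": "key0", "path": "/tmp/psk.txt" } } ] },
        { "subsystem": "bdev",
          "config": [ { "method": "bdev_nvme_attach_controller",
                        "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                                    "traddr": "10.0.0.2", "trsvcid": "4420",
                                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                                    "psk": "key0" } },
                      { "method": "bdev_wait_for_examine" } ] }
      ]
    }
    EOF
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c /tmp/bdevperf.json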
00:17:19.600 [2024-04-27 00:51:12.192426] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1703147 ] 00:17:19.600 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.600 [2024-04-27 00:51:12.246805] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.860 [2024-04-27 00:51:12.325093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.860 [2024-04-27 00:51:12.467270] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:20.430 00:51:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:20.430 00:51:13 -- common/autotest_common.sh@850 -- # return 0 00:17:20.430 00:51:13 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:20.430 00:51:13 -- target/tls.sh@275 -- # jq -r '.[].name' 00:17:20.691 00:51:13 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.691 00:51:13 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:20.691 Running I/O for 1 seconds... 00:17:22.067 00:17:22.067 Latency(us) 00:17:22.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.067 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:22.067 Verification LBA range: start 0x0 length 0x2000 00:17:22.067 nvme0n1 : 1.07 1325.57 5.18 0.00 0.00 94172.63 5784.26 164124.94 00:17:22.067 =================================================================================================================== 00:17:22.067 Total : 1325.57 5.18 0.00 0.00 94172.63 5784.26 164124.94 00:17:22.067 0 00:17:22.067 00:51:14 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:17:22.067 00:51:14 -- target/tls.sh@279 -- # cleanup 00:17:22.067 00:51:14 -- target/tls.sh@15 -- # process_shm --id 0 00:17:22.067 00:51:14 -- common/autotest_common.sh@794 -- # type=--id 00:17:22.067 00:51:14 -- common/autotest_common.sh@795 -- # id=0 00:17:22.067 00:51:14 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:17:22.067 00:51:14 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:22.067 00:51:14 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:17:22.067 00:51:14 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:17:22.067 00:51:14 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:17:22.067 00:51:14 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:22.067 nvmf_trace.0 00:17:22.067 00:51:14 -- common/autotest_common.sh@809 -- # return 0 00:17:22.067 00:51:14 -- target/tls.sh@16 -- # killprocess 1703147 00:17:22.067 00:51:14 -- common/autotest_common.sh@936 -- # '[' -z 1703147 ']' 00:17:22.067 00:51:14 -- common/autotest_common.sh@940 -- # kill -0 1703147 00:17:22.067 00:51:14 -- common/autotest_common.sh@941 -- # uname 00:17:22.067 00:51:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:22.067 00:51:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1703147 00:17:22.067 00:51:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:22.067 00:51:14 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:17:22.067 00:51:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1703147' 00:17:22.067 killing process with pid 1703147 00:17:22.067 00:51:14 -- common/autotest_common.sh@955 -- # kill 1703147 00:17:22.067 Received shutdown signal, test time was about 1.000000 seconds 00:17:22.067 00:17:22.067 Latency(us) 00:17:22.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.067 =================================================================================================================== 00:17:22.067 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:22.067 00:51:14 -- common/autotest_common.sh@960 -- # wait 1703147 00:17:22.067 00:51:14 -- target/tls.sh@17 -- # nvmftestfini 00:17:22.067 00:51:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:22.067 00:51:14 -- nvmf/common.sh@117 -- # sync 00:17:22.067 00:51:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:22.067 00:51:14 -- nvmf/common.sh@120 -- # set +e 00:17:22.067 00:51:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:22.067 00:51:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:22.067 rmmod nvme_tcp 00:17:22.067 rmmod nvme_fabrics 00:17:22.067 rmmod nvme_keyring 00:17:22.067 00:51:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:22.067 00:51:14 -- nvmf/common.sh@124 -- # set -e 00:17:22.067 00:51:14 -- nvmf/common.sh@125 -- # return 0 00:17:22.067 00:51:14 -- nvmf/common.sh@478 -- # '[' -n 1702901 ']' 00:17:22.067 00:51:14 -- nvmf/common.sh@479 -- # killprocess 1702901 00:17:22.067 00:51:14 -- common/autotest_common.sh@936 -- # '[' -z 1702901 ']' 00:17:22.067 00:51:14 -- common/autotest_common.sh@940 -- # kill -0 1702901 00:17:22.067 00:51:14 -- common/autotest_common.sh@941 -- # uname 00:17:22.067 00:51:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:22.067 00:51:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1702901 00:17:22.067 00:51:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:22.068 00:51:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:22.068 00:51:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1702901' 00:17:22.068 killing process with pid 1702901 00:17:22.068 00:51:14 -- common/autotest_common.sh@955 -- # kill 1702901 00:17:22.068 00:51:14 -- common/autotest_common.sh@960 -- # wait 1702901 00:17:22.326 00:51:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:22.326 00:51:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:22.326 00:51:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:22.326 00:51:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.326 00:51:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:22.326 00:51:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.326 00:51:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.326 00:51:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.861 00:51:17 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:24.861 00:51:17 -- target/tls.sh@18 -- # rm -f /tmp/tmp.uk9QsBLmqK /tmp/tmp.b3FoyXnB7P /tmp/tmp.Y5HQJPkhYG 00:17:24.861 00:17:24.861 real 1m23.140s 00:17:24.861 user 2m9.330s 00:17:24.861 sys 0m27.080s 00:17:24.861 00:51:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:24.861 00:51:17 -- common/autotest_common.sh@10 -- # set +x 00:17:24.861 ************************************ 00:17:24.861 END TEST nvmf_tls 00:17:24.861 
************************************ 00:17:24.861 00:51:17 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:24.861 00:51:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:24.861 00:51:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:24.861 00:51:17 -- common/autotest_common.sh@10 -- # set +x 00:17:24.861 ************************************ 00:17:24.861 START TEST nvmf_fips 00:17:24.861 ************************************ 00:17:24.861 00:51:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:24.861 * Looking for test storage... 00:17:24.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:17:24.861 00:51:17 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.861 00:51:17 -- nvmf/common.sh@7 -- # uname -s 00:17:24.861 00:51:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.861 00:51:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.861 00:51:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.861 00:51:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.861 00:51:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.861 00:51:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.861 00:51:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.861 00:51:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.861 00:51:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.861 00:51:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.861 00:51:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.861 00:51:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.861 00:51:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.861 00:51:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.861 00:51:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.861 00:51:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.861 00:51:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.861 00:51:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.861 00:51:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.861 00:51:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.861 00:51:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.861 00:51:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.861 00:51:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.861 00:51:17 -- paths/export.sh@5 -- # export PATH 00:17:24.861 00:51:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.861 00:51:17 -- nvmf/common.sh@47 -- # : 0 00:17:24.861 00:51:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:24.861 00:51:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:24.861 00:51:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:24.861 00:51:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.861 00:51:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.861 00:51:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:24.861 00:51:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:24.861 00:51:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:24.861 00:51:17 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.861 00:51:17 -- fips/fips.sh@89 -- # check_openssl_version 00:17:24.861 00:51:17 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:24.861 00:51:17 -- fips/fips.sh@85 -- # openssl version 00:17:24.861 00:51:17 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:24.861 00:51:17 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:24.861 00:51:17 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:24.861 00:51:17 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:17:24.861 00:51:17 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:17:24.861 00:51:17 -- scripts/common.sh@333 -- # IFS=.-: 00:17:24.861 00:51:17 -- scripts/common.sh@333 -- # read -ra ver1 00:17:24.861 00:51:17 -- scripts/common.sh@334 -- # IFS=.-: 00:17:24.861 00:51:17 -- scripts/common.sh@334 -- # read -ra ver2 00:17:24.861 00:51:17 -- scripts/common.sh@335 -- # local 'op=>=' 00:17:24.861 00:51:17 -- scripts/common.sh@337 -- # ver1_l=3 00:17:24.861 00:51:17 -- scripts/common.sh@338 -- # ver2_l=3 00:17:24.861 00:51:17 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:17:24.861 00:51:17 -- scripts/common.sh@341 -- # case "$op" in 00:17:24.861 00:51:17 -- scripts/common.sh@345 -- # : 1 00:17:24.861 00:51:17 -- scripts/common.sh@361 -- # (( v = 0 )) 00:17:24.861 00:51:17 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:24.861 00:51:17 -- scripts/common.sh@362 -- # decimal 3 00:17:24.861 00:51:17 -- scripts/common.sh@350 -- # local d=3 00:17:24.861 00:51:17 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:24.861 00:51:17 -- scripts/common.sh@352 -- # echo 3 00:17:24.861 00:51:17 -- scripts/common.sh@362 -- # ver1[v]=3 00:17:24.861 00:51:17 -- scripts/common.sh@363 -- # decimal 3 00:17:24.861 00:51:17 -- scripts/common.sh@350 -- # local d=3 00:17:24.861 00:51:17 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:24.861 00:51:17 -- scripts/common.sh@352 -- # echo 3 00:17:24.861 00:51:17 -- scripts/common.sh@363 -- # ver2[v]=3 00:17:24.861 00:51:17 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:24.861 00:51:17 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:24.861 00:51:17 -- scripts/common.sh@361 -- # (( v++ )) 00:17:24.861 00:51:17 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:24.861 00:51:17 -- scripts/common.sh@362 -- # decimal 0 00:17:24.861 00:51:17 -- scripts/common.sh@350 -- # local d=0 00:17:24.861 00:51:17 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:24.861 00:51:17 -- scripts/common.sh@352 -- # echo 0 00:17:24.861 00:51:17 -- scripts/common.sh@362 -- # ver1[v]=0 00:17:24.862 00:51:17 -- scripts/common.sh@363 -- # decimal 0 00:17:24.862 00:51:17 -- scripts/common.sh@350 -- # local d=0 00:17:24.862 00:51:17 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:24.862 00:51:17 -- scripts/common.sh@352 -- # echo 0 00:17:24.862 00:51:17 -- scripts/common.sh@363 -- # ver2[v]=0 00:17:24.862 00:51:17 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:24.862 00:51:17 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:24.862 00:51:17 -- scripts/common.sh@361 -- # (( v++ )) 00:17:24.862 00:51:17 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:24.862 00:51:17 -- scripts/common.sh@362 -- # decimal 9 00:17:24.862 00:51:17 -- scripts/common.sh@350 -- # local d=9 00:17:24.862 00:51:17 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:24.862 00:51:17 -- scripts/common.sh@352 -- # echo 9 00:17:24.862 00:51:17 -- scripts/common.sh@362 -- # ver1[v]=9 00:17:24.862 00:51:17 -- scripts/common.sh@363 -- # decimal 0 00:17:24.862 00:51:17 -- scripts/common.sh@350 -- # local d=0 00:17:24.862 00:51:17 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:24.862 00:51:17 -- scripts/common.sh@352 -- # echo 0 00:17:24.862 00:51:17 -- scripts/common.sh@363 -- # ver2[v]=0 00:17:24.862 00:51:17 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:24.862 00:51:17 -- scripts/common.sh@364 -- # return 0 00:17:24.862 00:51:17 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:24.862 00:51:17 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:24.862 00:51:17 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:24.862 00:51:17 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:24.862 00:51:17 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:24.862 00:51:17 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:24.862 00:51:17 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:24.862 00:51:17 -- fips/fips.sh@113 -- # build_openssl_config 00:17:24.862 00:51:17 -- fips/fips.sh@37 -- # cat 00:17:24.862 00:51:17 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:24.862 00:51:17 -- fips/fips.sh@58 -- # cat - 00:17:24.862 00:51:17 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:24.862 00:51:17 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:24.862 00:51:17 -- fips/fips.sh@116 -- # mapfile -t providers 00:17:24.862 00:51:17 -- fips/fips.sh@116 -- # openssl list -providers 00:17:24.862 00:51:17 -- fips/fips.sh@116 -- # grep name 00:17:24.862 00:51:17 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:24.862 00:51:17 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:24.862 00:51:17 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:24.862 00:51:17 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:24.862 00:51:17 -- common/autotest_common.sh@638 -- # local es=0 00:17:24.862 00:51:17 -- fips/fips.sh@127 -- # : 00:17:24.862 00:51:17 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:24.862 00:51:17 -- common/autotest_common.sh@626 -- # local arg=openssl 00:17:24.862 00:51:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:24.862 00:51:17 -- common/autotest_common.sh@630 -- # type -t openssl 00:17:24.862 00:51:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:24.862 00:51:17 -- common/autotest_common.sh@632 -- # type -P openssl 00:17:24.862 00:51:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:24.862 00:51:17 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:17:24.862 00:51:17 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:17:24.862 00:51:17 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:17:24.862 Error setting digest 00:17:24.862 0002882EEC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:24.862 0002882EEC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:24.862 00:51:17 -- common/autotest_common.sh@641 -- # es=1 00:17:24.862 00:51:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:24.862 00:51:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:24.862 00:51:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:24.862 00:51:17 -- fips/fips.sh@130 -- # nvmftestinit 00:17:24.862 00:51:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:24.862 00:51:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.862 00:51:17 -- nvmf/common.sh@437 -- # prepare_net_devs 
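Before running any TLS I/O, fips.sh verifies (as traced above) that the OpenSSL version clears 3.0.0, that the FIPS provider module is installed and listed alongside the base provider, and that a non-approved digest such as MD5 is actually rejected; the "Error setting digest" output is the expected failure. A simplified sketch of that gate, leaving out the temporary OpenSSL config the script generates:

    # 1. The FIPS provider module must be present in OpenSSL's module directory.
    test -f "$(openssl info -modulesdir)/fips.so" || echo "fips.so not installed"
    # 2. Both the base and fips providers must be listed as active.
    openssl list -providers | grep -Ei 'base|fips'
    # 3. MD5 is not FIPS-approved, so it must fail when FIPS mode is enforced.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 unexpectedly succeeded - FIPS mode is not enforced"
    else
        echo "MD5 rejected as expected"
    fi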
00:17:24.862 00:51:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:24.862 00:51:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:24.862 00:51:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.862 00:51:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.862 00:51:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.862 00:51:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:24.862 00:51:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:24.862 00:51:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:24.862 00:51:17 -- common/autotest_common.sh@10 -- # set +x 00:17:30.134 00:51:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:30.134 00:51:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:30.134 00:51:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:30.134 00:51:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:30.134 00:51:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:30.134 00:51:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:30.134 00:51:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:30.134 00:51:22 -- nvmf/common.sh@295 -- # net_devs=() 00:17:30.134 00:51:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:30.134 00:51:22 -- nvmf/common.sh@296 -- # e810=() 00:17:30.134 00:51:22 -- nvmf/common.sh@296 -- # local -ga e810 00:17:30.134 00:51:22 -- nvmf/common.sh@297 -- # x722=() 00:17:30.134 00:51:22 -- nvmf/common.sh@297 -- # local -ga x722 00:17:30.134 00:51:22 -- nvmf/common.sh@298 -- # mlx=() 00:17:30.134 00:51:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:30.134 00:51:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.134 00:51:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.134 00:51:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.135 00:51:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.135 00:51:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.135 00:51:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.135 00:51:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.135 00:51:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.135 00:51:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.135 00:51:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.135 00:51:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.135 00:51:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:30.135 00:51:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:30.135 00:51:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:30.135 00:51:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.135 00:51:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:30.135 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:30.135 00:51:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.135 00:51:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:30.135 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:30.135 00:51:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:30.135 00:51:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.135 00:51:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.135 00:51:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:30.135 00:51:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.135 00:51:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:30.135 Found net devices under 0000:86:00.0: cvl_0_0 00:17:30.135 00:51:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.135 00:51:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.135 00:51:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.135 00:51:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:30.135 00:51:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.135 00:51:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:30.135 Found net devices under 0000:86:00.1: cvl_0_1 00:17:30.135 00:51:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.135 00:51:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:30.135 00:51:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:30.135 00:51:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:30.135 00:51:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.135 00:51:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.135 00:51:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.135 00:51:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:30.135 00:51:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.135 00:51:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.135 00:51:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:30.135 00:51:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.135 00:51:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.135 00:51:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:30.135 00:51:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:30.135 00:51:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.135 00:51:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.135 00:51:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.135 00:51:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:17:30.135 00:51:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:30.135 00:51:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.135 00:51:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.135 00:51:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.135 00:51:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:30.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:17:30.135 00:17:30.135 --- 10.0.0.2 ping statistics --- 00:17:30.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.135 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:17:30.135 00:51:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:17:30.135 00:17:30.135 --- 10.0.0.1 ping statistics --- 00:17:30.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.135 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:17:30.135 00:51:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.135 00:51:22 -- nvmf/common.sh@411 -- # return 0 00:17:30.135 00:51:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:30.135 00:51:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.135 00:51:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:30.135 00:51:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.135 00:51:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:30.135 00:51:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:30.135 00:51:22 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:30.135 00:51:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:30.135 00:51:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:30.135 00:51:22 -- common/autotest_common.sh@10 -- # set +x 00:17:30.135 00:51:22 -- nvmf/common.sh@470 -- # nvmfpid=1706942 00:17:30.135 00:51:22 -- nvmf/common.sh@471 -- # waitforlisten 1706942 00:17:30.135 00:51:22 -- common/autotest_common.sh@817 -- # '[' -z 1706942 ']' 00:17:30.135 00:51:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.135 00:51:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:30.135 00:51:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.135 00:51:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.135 00:51:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:30.135 00:51:22 -- common/autotest_common.sh@10 -- # set +x 00:17:30.135 [2024-04-27 00:51:22.508954] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
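nvmf_tcp_init, traced above, splits the two E810 ports between the default namespace (initiator side, cvl_0_1) and a dedicated namespace hosting the target (cvl_0_0), then verifies connectivity with a ping in each direction. Condensed, the topology setup amounts to the following; interface names and addresses mirror this particular run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host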
00:17:30.135 [2024-04-27 00:51:22.509000] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.135 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.135 [2024-04-27 00:51:22.564877] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.135 [2024-04-27 00:51:22.640226] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.135 [2024-04-27 00:51:22.640258] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.135 [2024-04-27 00:51:22.640265] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.135 [2024-04-27 00:51:22.640271] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.135 [2024-04-27 00:51:22.640276] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.135 [2024-04-27 00:51:22.640290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.705 00:51:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:30.705 00:51:23 -- common/autotest_common.sh@850 -- # return 0 00:17:30.705 00:51:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:30.705 00:51:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:30.705 00:51:23 -- common/autotest_common.sh@10 -- # set +x 00:17:30.705 00:51:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.705 00:51:23 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:17:30.705 00:51:23 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:30.705 00:51:23 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:30.705 00:51:23 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:30.705 00:51:23 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:30.705 00:51:23 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:30.705 00:51:23 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:30.705 00:51:23 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:30.964 [2024-04-27 00:51:23.474151] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.964 [2024-04-27 00:51:23.490142] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:30.964 [2024-04-27 00:51:23.490294] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.964 [2024-04-27 00:51:23.518341] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:30.964 malloc0 00:17:30.964 00:51:23 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:30.964 00:51:23 -- fips/fips.sh@147 -- # bdevperf_pid=1707192 00:17:30.964 00:51:23 -- fips/fips.sh@148 -- # waitforlisten 1707192 /var/tmp/bdevperf.sock 00:17:30.964 00:51:23 -- common/autotest_common.sh@817 -- # '[' -z 1707192 ']' 00:17:30.964 00:51:23 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.964 00:51:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:30.964 00:51:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.964 00:51:23 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:30.964 00:51:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:30.964 00:51:23 -- common/autotest_common.sh@10 -- # set +x 00:17:30.964 [2024-04-27 00:51:23.596224] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:17:30.964 [2024-04-27 00:51:23.596270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707192 ] 00:17:30.964 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.964 [2024-04-27 00:51:23.644783] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.222 [2024-04-27 00:51:23.716628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.790 00:51:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:31.790 00:51:24 -- common/autotest_common.sh@850 -- # return 0 00:17:31.790 00:51:24 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:32.049 [2024-04-27 00:51:24.531198] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:32.049 [2024-04-27 00:51:24.531280] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:32.049 TLSTESTn1 00:17:32.049 00:51:24 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:32.049 Running I/O for 10 seconds... 
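With the target listening, the host-side part of the FIPS/TLS test boils down to the three steps traced above: write the interchange-format PSK to a file, attach a TLS-protected NVMe/TCP controller through bdevperf's RPC socket, and trigger the workload. A sketch using the test key shown above (the key.txt path is illustrative):

    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
    chmod 0600 key.txt
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key.txt
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests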
00:17:44.262 00:17:44.262 Latency(us) 00:17:44.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.262 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:44.262 Verification LBA range: start 0x0 length 0x2000 00:17:44.262 TLSTESTn1 : 10.09 1469.63 5.74 0.00 0.00 86779.45 7208.96 127652.73 00:17:44.262 =================================================================================================================== 00:17:44.262 Total : 1469.63 5.74 0.00 0.00 86779.45 7208.96 127652.73 00:17:44.262 0 00:17:44.262 00:51:34 -- fips/fips.sh@1 -- # cleanup 00:17:44.262 00:51:34 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:44.262 00:51:34 -- common/autotest_common.sh@794 -- # type=--id 00:17:44.262 00:51:34 -- common/autotest_common.sh@795 -- # id=0 00:17:44.262 00:51:34 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:17:44.262 00:51:34 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:44.262 00:51:34 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:17:44.262 00:51:34 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:17:44.262 00:51:34 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:17:44.262 00:51:34 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:44.262 nvmf_trace.0 00:17:44.262 00:51:34 -- common/autotest_common.sh@809 -- # return 0 00:17:44.262 00:51:34 -- fips/fips.sh@16 -- # killprocess 1707192 00:17:44.262 00:51:34 -- common/autotest_common.sh@936 -- # '[' -z 1707192 ']' 00:17:44.262 00:51:34 -- common/autotest_common.sh@940 -- # kill -0 1707192 00:17:44.262 00:51:34 -- common/autotest_common.sh@941 -- # uname 00:17:44.262 00:51:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:44.262 00:51:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1707192 00:17:44.262 00:51:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:44.262 00:51:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:44.262 00:51:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1707192' 00:17:44.262 killing process with pid 1707192 00:17:44.262 00:51:34 -- common/autotest_common.sh@955 -- # kill 1707192 00:17:44.262 Received shutdown signal, test time was about 10.000000 seconds 00:17:44.262 00:17:44.262 Latency(us) 00:17:44.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.262 =================================================================================================================== 00:17:44.262 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:44.262 [2024-04-27 00:51:34.972237] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:44.262 00:51:34 -- common/autotest_common.sh@960 -- # wait 1707192 00:17:44.262 00:51:35 -- fips/fips.sh@17 -- # nvmftestfini 00:17:44.262 00:51:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:44.262 00:51:35 -- nvmf/common.sh@117 -- # sync 00:17:44.262 00:51:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:44.262 00:51:35 -- nvmf/common.sh@120 -- # set +e 00:17:44.262 00:51:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:44.262 00:51:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:44.262 rmmod nvme_tcp 00:17:44.262 rmmod nvme_fabrics 00:17:44.262 rmmod nvme_keyring 
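Each test ends with the same teardown pattern seen above: archive the SPDK trace shared-memory file for offline analysis, stop bdevperf and the target, and unload the host NVMe/TCP modules. Roughly, with $bdevperf_pid and $nvmf_pid standing in for the PIDs tracked by the scripts and an illustrative output directory:

    shm_file=$(find /dev/shm -name '*.0' -printf '%f\n' | head -n1)
    [ -n "$shm_file" ] && tar -C /dev/shm -czf "./output/${shm_file}_shm.tar.gz" "$shm_file"
    kill "$bdevperf_pid"    # killprocess in the framework also checks the process name first
    kill "$nvmf_pid"
    modprobe -r nvme-tcp nvme-fabrics    # best-effort; dependent modules may still be in use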
00:17:44.263 00:51:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:44.263 00:51:35 -- nvmf/common.sh@124 -- # set -e 00:17:44.263 00:51:35 -- nvmf/common.sh@125 -- # return 0 00:17:44.263 00:51:35 -- nvmf/common.sh@478 -- # '[' -n 1706942 ']' 00:17:44.263 00:51:35 -- nvmf/common.sh@479 -- # killprocess 1706942 00:17:44.263 00:51:35 -- common/autotest_common.sh@936 -- # '[' -z 1706942 ']' 00:17:44.263 00:51:35 -- common/autotest_common.sh@940 -- # kill -0 1706942 00:17:44.263 00:51:35 -- common/autotest_common.sh@941 -- # uname 00:17:44.263 00:51:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:44.263 00:51:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1706942 00:17:44.263 00:51:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:44.263 00:51:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:44.263 00:51:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1706942' 00:17:44.263 killing process with pid 1706942 00:17:44.263 00:51:35 -- common/autotest_common.sh@955 -- # kill 1706942 00:17:44.263 [2024-04-27 00:51:35.283820] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:44.263 00:51:35 -- common/autotest_common.sh@960 -- # wait 1706942 00:17:44.263 00:51:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:44.263 00:51:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:44.263 00:51:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:44.263 00:51:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.263 00:51:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:44.263 00:51:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.263 00:51:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.263 00:51:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.202 00:51:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:45.202 00:51:37 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:45.202 00:17:45.202 real 0m20.366s 00:17:45.202 user 0m22.787s 00:17:45.202 sys 0m8.439s 00:17:45.202 00:51:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:45.202 00:51:37 -- common/autotest_common.sh@10 -- # set +x 00:17:45.202 ************************************ 00:17:45.202 END TEST nvmf_fips 00:17:45.202 ************************************ 00:17:45.202 00:51:37 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:17:45.202 00:51:37 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:17:45.202 00:51:37 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:17:45.202 00:51:37 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:17:45.202 00:51:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:45.202 00:51:37 -- common/autotest_common.sh@10 -- # set +x 00:17:49.399 00:51:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:49.399 00:51:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:49.399 00:51:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:49.399 00:51:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:49.399 00:51:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:49.399 00:51:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:49.399 00:51:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:49.399 00:51:41 -- nvmf/common.sh@295 -- # net_devs=() 00:17:49.399 00:51:41 -- nvmf/common.sh@295 -- # local -ga net_devs 
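gather_supported_nvmf_pci_devs, traced above and re-run before each transport test, walks the PCI bus looking for supported NIC IDs (Intel E810/X722 and several Mellanox parts) and records the matching kernel net devices. A simplified equivalent limited to the E810 ID found on this machine (0x8086:0x159b):

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor")
        device=$(cat "$pci/device")
        if [ "$vendor" = "0x8086" ] && [ "$device" = "0x159b" ]; then
            echo "Found $(basename "$pci") ($vendor - $device)"
            ls "$pci/net" 2>/dev/null        # e.g. cvl_0_0 / cvl_0_1 on this host
        fi
    done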
00:17:49.399 00:51:41 -- nvmf/common.sh@296 -- # e810=() 00:17:49.399 00:51:41 -- nvmf/common.sh@296 -- # local -ga e810 00:17:49.399 00:51:41 -- nvmf/common.sh@297 -- # x722=() 00:17:49.399 00:51:41 -- nvmf/common.sh@297 -- # local -ga x722 00:17:49.399 00:51:41 -- nvmf/common.sh@298 -- # mlx=() 00:17:49.399 00:51:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:49.399 00:51:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.399 00:51:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.399 00:51:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.399 00:51:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.399 00:51:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.399 00:51:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.399 00:51:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.399 00:51:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.399 00:51:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.399 00:51:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.399 00:51:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.399 00:51:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:49.399 00:51:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:49.399 00:51:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:49.399 00:51:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.399 00:51:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:49.399 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:49.399 00:51:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:49.399 00:51:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:49.399 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:49.399 00:51:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:49.399 00:51:41 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:49.399 00:51:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.399 00:51:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.399 00:51:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:49.399 00:51:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.399 00:51:42 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:17:49.399 Found net devices under 0000:86:00.0: cvl_0_0 00:17:49.399 00:51:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.399 00:51:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:49.399 00:51:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.399 00:51:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:49.399 00:51:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.399 00:51:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:49.399 Found net devices under 0000:86:00.1: cvl_0_1 00:17:49.399 00:51:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.399 00:51:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:49.399 00:51:42 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.399 00:51:42 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:17:49.399 00:51:42 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:17:49.399 00:51:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:49.399 00:51:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:49.399 00:51:42 -- common/autotest_common.sh@10 -- # set +x 00:17:49.659 ************************************ 00:17:49.659 START TEST nvmf_perf_adq 00:17:49.659 ************************************ 00:17:49.659 00:51:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:17:49.659 * Looking for test storage... 00:17:49.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.659 00:51:42 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.659 00:51:42 -- nvmf/common.sh@7 -- # uname -s 00:17:49.659 00:51:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.659 00:51:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.659 00:51:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.659 00:51:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.659 00:51:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.659 00:51:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.659 00:51:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.659 00:51:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.659 00:51:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.659 00:51:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.659 00:51:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.659 00:51:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.659 00:51:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.659 00:51:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.659 00:51:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.659 00:51:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.659 00:51:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.659 00:51:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.659 00:51:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.659 00:51:42 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.659 00:51:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.659 00:51:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.659 00:51:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.659 00:51:42 -- paths/export.sh@5 -- # export PATH 00:17:49.659 00:51:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.659 00:51:42 -- nvmf/common.sh@47 -- # : 0 00:17:49.659 00:51:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:49.659 00:51:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:49.659 00:51:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.659 00:51:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.659 00:51:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.659 00:51:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:49.659 00:51:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:49.659 00:51:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:49.659 00:51:42 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:17:49.659 00:51:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:49.659 00:51:42 -- common/autotest_common.sh@10 -- # set +x 00:17:54.940 00:51:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:54.940 00:51:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:54.940 00:51:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:54.940 00:51:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:54.940 
00:51:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:54.940 00:51:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:54.940 00:51:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:54.940 00:51:46 -- nvmf/common.sh@295 -- # net_devs=() 00:17:54.940 00:51:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:54.940 00:51:46 -- nvmf/common.sh@296 -- # e810=() 00:17:54.940 00:51:46 -- nvmf/common.sh@296 -- # local -ga e810 00:17:54.940 00:51:46 -- nvmf/common.sh@297 -- # x722=() 00:17:54.940 00:51:46 -- nvmf/common.sh@297 -- # local -ga x722 00:17:54.940 00:51:46 -- nvmf/common.sh@298 -- # mlx=() 00:17:54.940 00:51:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:54.940 00:51:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:54.940 00:51:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:54.940 00:51:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:54.940 00:51:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:54.940 00:51:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:54.940 00:51:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:54.940 00:51:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:54.940 00:51:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:54.940 00:51:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:54.940 00:51:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:54.940 00:51:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:54.940 00:51:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:54.940 00:51:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:54.940 00:51:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:54.940 00:51:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.940 00:51:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:54.940 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:54.940 00:51:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:54.940 00:51:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:54.940 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:54.940 00:51:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:54.940 00:51:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:54.940 00:51:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
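The device-discovery loop traced here reduces to a small sysfs walk. A minimal stand-alone sketch of what nvmf/common.sh is doing at this point (the PCI address is the E810 port reported in this run, not a value baked into the script):

  #!/usr/bin/env bash
  # Resolve the kernel net device(s) behind one NIC port, given its PCI address.
  pci=0000:86:00.0
  # Each netdev bound to the function shows up as a directory under .../net/.
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  # Strip the sysfs prefix, leaving only interface names (here: cvl_0_0).
  pci_net_devs=("${pci_net_devs[@]##*/}")
  echo "Found net devices under $pci: ${pci_net_devs[*]}"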
00:17:54.940 00:51:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.940 00:51:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:54.940 00:51:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.940 00:51:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:54.940 Found net devices under 0000:86:00.0: cvl_0_0 00:17:54.940 00:51:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.940 00:51:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:54.940 00:51:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:54.940 00:51:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:54.940 00:51:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:54.940 00:51:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:54.940 Found net devices under 0000:86:00.1: cvl_0_1 00:17:54.940 00:51:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:54.940 00:51:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:54.940 00:51:46 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:54.940 00:51:46 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:17:54.940 00:51:46 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:17:54.940 00:51:46 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:17:54.940 00:51:46 -- target/perf_adq.sh@52 -- # rmmod ice 00:17:55.200 00:51:47 -- target/perf_adq.sh@53 -- # modprobe ice 00:17:57.144 00:51:49 -- target/perf_adq.sh@54 -- # sleep 5 00:18:02.467 00:51:54 -- target/perf_adq.sh@67 -- # nvmftestinit 00:18:02.467 00:51:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:02.467 00:51:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.467 00:51:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:02.467 00:51:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:02.467 00:51:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:02.467 00:51:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.467 00:51:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.467 00:51:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.467 00:51:54 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:02.467 00:51:54 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:02.467 00:51:54 -- common/autotest_common.sh@10 -- # set +x 00:18:02.467 00:51:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:02.467 00:51:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:02.467 00:51:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:02.467 00:51:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:02.467 00:51:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:02.467 00:51:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:02.467 00:51:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:02.467 00:51:54 -- nvmf/common.sh@295 -- # net_devs=() 00:18:02.467 00:51:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:02.467 00:51:54 -- nvmf/common.sh@296 -- # e810=() 00:18:02.467 00:51:54 -- nvmf/common.sh@296 -- # local -ga e810 00:18:02.467 00:51:54 -- nvmf/common.sh@297 -- # x722=() 00:18:02.467 00:51:54 -- nvmf/common.sh@297 -- # local -ga x722 00:18:02.467 00:51:54 -- nvmf/common.sh@298 -- # mlx=() 00:18:02.467 00:51:54 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:18:02.467 00:51:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:02.467 00:51:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:02.467 00:51:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:02.467 00:51:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:02.467 00:51:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:02.467 00:51:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:02.467 00:51:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:02.467 00:51:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:02.467 00:51:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:02.467 00:51:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:02.467 00:51:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:02.467 00:51:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:02.467 00:51:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:02.467 00:51:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:02.467 00:51:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.467 00:51:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:02.467 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:02.467 00:51:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:02.467 00:51:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:02.467 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:02.467 00:51:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:02.467 00:51:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.467 00:51:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.467 00:51:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:02.467 00:51:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.467 00:51:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:02.467 Found net devices under 0000:86:00.0: cvl_0_0 00:18:02.467 00:51:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.467 00:51:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:02.467 00:51:54 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:02.467 00:51:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:02.467 00:51:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:02.467 00:51:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:02.467 Found net devices under 0000:86:00.1: cvl_0_1 00:18:02.467 00:51:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:02.467 00:51:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:02.467 00:51:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:02.467 00:51:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:02.467 00:51:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:02.467 00:51:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.467 00:51:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:02.467 00:51:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:02.467 00:51:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:02.467 00:51:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:02.467 00:51:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:02.467 00:51:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:02.467 00:51:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:02.467 00:51:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.467 00:51:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:02.467 00:51:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:02.467 00:51:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:02.467 00:51:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:02.467 00:51:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:02.467 00:51:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:02.467 00:51:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:02.467 00:51:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:02.467 00:51:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:02.467 00:51:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:02.467 00:51:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:02.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:02.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:18:02.467 00:18:02.468 --- 10.0.0.2 ping statistics --- 00:18:02.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.468 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:18:02.468 00:51:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:02.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:02.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.426 ms 00:18:02.468 00:18:02.468 --- 10.0.0.1 ping statistics --- 00:18:02.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:02.468 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:18:02.468 00:51:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:02.468 00:51:54 -- nvmf/common.sh@411 -- # return 0 00:18:02.468 00:51:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:02.468 00:51:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:02.468 00:51:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:02.468 00:51:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:02.468 00:51:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:02.468 00:51:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:02.468 00:51:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:02.468 00:51:54 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:02.468 00:51:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:02.468 00:51:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:02.468 00:51:54 -- common/autotest_common.sh@10 -- # set +x 00:18:02.468 00:51:54 -- nvmf/common.sh@470 -- # nvmfpid=1716665 00:18:02.468 00:51:54 -- nvmf/common.sh@471 -- # waitforlisten 1716665 00:18:02.468 00:51:54 -- common/autotest_common.sh@817 -- # '[' -z 1716665 ']' 00:18:02.468 00:51:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.468 00:51:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:02.468 00:51:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.468 00:51:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:02.468 00:51:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:02.468 00:51:54 -- common/autotest_common.sh@10 -- # set +x 00:18:02.468 [2024-04-27 00:51:55.008476] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:18:02.468 [2024-04-27 00:51:55.008521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.468 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.468 [2024-04-27 00:51:55.064333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:02.468 [2024-04-27 00:51:55.143994] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.468 [2024-04-27 00:51:55.144031] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:02.468 [2024-04-27 00:51:55.144038] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.468 [2024-04-27 00:51:55.144045] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.468 [2024-04-27 00:51:55.144050] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
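The two ping checks above exercise the loopback topology that nvmf_tcp_init builds from the pair of E810 ports: one port is moved into a private network namespace and acts as the target, the other stays in the root namespace as the initiator. Condensed from the commands in the trace (interface and address names exactly as logged for this run):

  # Target port into its own namespace; initiator port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # 10.0.0.1 = initiator side (root ns), 10.0.0.2 = target side (inside the ns).
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP (port 4420) in on the initiator side, then sanity-check both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1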
00:18:02.468 [2024-04-27 00:51:55.144093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:02.468 [2024-04-27 00:51:55.144144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.468 [2024-04-27 00:51:55.144162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:02.468 [2024-04-27 00:51:55.144163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.405 00:51:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:03.405 00:51:55 -- common/autotest_common.sh@850 -- # return 0 00:18:03.405 00:51:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:03.405 00:51:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:03.405 00:51:55 -- common/autotest_common.sh@10 -- # set +x 00:18:03.405 00:51:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.405 00:51:55 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:18:03.405 00:51:55 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:03.405 00:51:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:03.405 00:51:55 -- common/autotest_common.sh@10 -- # set +x 00:18:03.405 00:51:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:03.405 00:51:55 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:18:03.405 00:51:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:03.405 00:51:55 -- common/autotest_common.sh@10 -- # set +x 00:18:03.405 00:51:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:03.405 00:51:55 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:03.405 00:51:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:03.406 00:51:55 -- common/autotest_common.sh@10 -- # set +x 00:18:03.406 [2024-04-27 00:51:55.957856] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.406 00:51:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:03.406 00:51:55 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:03.406 00:51:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:03.406 00:51:55 -- common/autotest_common.sh@10 -- # set +x 00:18:03.406 Malloc1 00:18:03.406 00:51:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:03.406 00:51:55 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:03.406 00:51:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:03.406 00:51:55 -- common/autotest_common.sh@10 -- # set +x 00:18:03.406 00:51:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:03.406 00:51:55 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:03.406 00:51:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:03.406 00:51:55 -- common/autotest_common.sh@10 -- # set +x 00:18:03.406 00:51:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:03.406 00:51:56 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.406 00:51:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:03.406 00:51:56 -- common/autotest_common.sh@10 -- # set +x 00:18:03.406 [2024-04-27 00:51:56.009705] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.406 00:51:56 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:03.406 00:51:56 -- target/perf_adq.sh@73 -- # perfpid=1716917 00:18:03.406 00:51:56 -- target/perf_adq.sh@74 -- # sleep 2 00:18:03.406 00:51:56 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:03.406 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.939 00:51:58 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:18:05.939 00:51:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:05.939 00:51:58 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:18:05.939 00:51:58 -- target/perf_adq.sh@76 -- # wc -l 00:18:05.939 00:51:58 -- common/autotest_common.sh@10 -- # set +x 00:18:05.939 00:51:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:05.939 00:51:58 -- target/perf_adq.sh@76 -- # count=4 00:18:05.939 00:51:58 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:18:05.939 00:51:58 -- target/perf_adq.sh@81 -- # wait 1716917 00:18:14.057 Initializing NVMe Controllers 00:18:14.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:14.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:14.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:14.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:14.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:14.057 Initialization complete. Launching workers. 00:18:14.057 ======================================================== 00:18:14.057 Latency(us) 00:18:14.057 Device Information : IOPS MiB/s Average min max 00:18:14.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10015.09 39.12 6390.41 1861.20 11197.53 00:18:14.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9353.69 36.54 6842.49 1518.15 17360.76 00:18:14.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9825.99 38.38 6513.39 1695.32 14236.72 00:18:14.057 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9817.39 38.35 6535.77 1605.68 47420.18 00:18:14.057 ======================================================== 00:18:14.057 Total : 39012.15 152.39 6566.36 1518.15 47420.18 00:18:14.057 00:18:14.057 00:52:06 -- target/perf_adq.sh@82 -- # nvmftestfini 00:18:14.057 00:52:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:14.057 00:52:06 -- nvmf/common.sh@117 -- # sync 00:18:14.057 00:52:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.057 00:52:06 -- nvmf/common.sh@120 -- # set +e 00:18:14.057 00:52:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.057 00:52:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.057 rmmod nvme_tcp 00:18:14.057 rmmod nvme_fabrics 00:18:14.057 rmmod nvme_keyring 00:18:14.057 00:52:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.057 00:52:06 -- nvmf/common.sh@124 -- # set -e 00:18:14.057 00:52:06 -- nvmf/common.sh@125 -- # return 0 00:18:14.057 00:52:06 -- nvmf/common.sh@478 -- # '[' -n 1716665 ']' 00:18:14.057 00:52:06 -- nvmf/common.sh@479 -- # killprocess 1716665 00:18:14.057 00:52:06 -- common/autotest_common.sh@936 -- # '[' -z 1716665 ']' 00:18:14.057 00:52:06 -- common/autotest_common.sh@940 -- 
# kill -0 1716665 00:18:14.057 00:52:06 -- common/autotest_common.sh@941 -- # uname 00:18:14.057 00:52:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:14.057 00:52:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1716665 00:18:14.057 00:52:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:14.057 00:52:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:14.057 00:52:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1716665' 00:18:14.057 killing process with pid 1716665 00:18:14.057 00:52:06 -- common/autotest_common.sh@955 -- # kill 1716665 00:18:14.057 00:52:06 -- common/autotest_common.sh@960 -- # wait 1716665 00:18:14.057 00:52:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:14.057 00:52:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:14.057 00:52:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:14.057 00:52:06 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.057 00:52:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.057 00:52:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.057 00:52:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.057 00:52:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.965 00:52:08 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:15.965 00:52:08 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:18:15.965 00:52:08 -- target/perf_adq.sh@52 -- # rmmod ice 00:18:17.346 00:52:09 -- target/perf_adq.sh@53 -- # modprobe ice 00:18:19.276 00:52:11 -- target/perf_adq.sh@54 -- # sleep 5 00:18:24.550 00:52:16 -- target/perf_adq.sh@87 -- # nvmftestinit 00:18:24.550 00:52:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:24.550 00:52:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.550 00:52:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:24.550 00:52:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:24.550 00:52:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:24.550 00:52:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.550 00:52:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.550 00:52:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.550 00:52:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:24.550 00:52:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:24.550 00:52:16 -- common/autotest_common.sh@10 -- # set +x 00:18:24.550 00:52:16 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:24.550 00:52:16 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:24.550 00:52:16 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:24.550 00:52:16 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:24.550 00:52:16 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:24.550 00:52:16 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:24.550 00:52:16 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:24.550 00:52:16 -- nvmf/common.sh@295 -- # net_devs=() 00:18:24.550 00:52:16 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:24.550 00:52:16 -- nvmf/common.sh@296 -- # e810=() 00:18:24.550 00:52:16 -- nvmf/common.sh@296 -- # local -ga e810 00:18:24.550 00:52:16 -- nvmf/common.sh@297 -- # x722=() 00:18:24.550 00:52:16 -- nvmf/common.sh@297 -- # local -ga x722 00:18:24.550 00:52:16 -- nvmf/common.sh@298 -- # mlx=() 00:18:24.550 00:52:16 
-- nvmf/common.sh@298 -- # local -ga mlx 00:18:24.550 00:52:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:24.550 00:52:16 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:24.550 00:52:16 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:24.550 00:52:16 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:24.550 00:52:16 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:24.550 00:52:16 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:24.550 00:52:16 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:24.550 00:52:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:24.550 00:52:16 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:24.550 00:52:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:24.550 00:52:16 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:24.550 00:52:16 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:24.550 00:52:16 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:24.550 00:52:16 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:24.550 00:52:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:24.550 00:52:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:24.550 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:24.550 00:52:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:24.550 00:52:16 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:24.550 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:24.550 00:52:16 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:24.550 00:52:16 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:24.550 00:52:16 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.550 00:52:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:24.550 00:52:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.550 00:52:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:24.550 Found net devices under 0000:86:00.0: cvl_0_0 00:18:24.550 00:52:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.550 00:52:16 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:24.550 00:52:16 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.550 00:52:16 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:24.550 00:52:16 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.550 00:52:16 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:24.550 Found net devices under 0000:86:00.1: cvl_0_1 00:18:24.550 00:52:16 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.550 00:52:16 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:24.550 00:52:16 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:24.550 00:52:16 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:24.550 00:52:16 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:24.550 00:52:16 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.550 00:52:16 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.550 00:52:16 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:24.550 00:52:16 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:24.550 00:52:16 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:24.550 00:52:16 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:24.550 00:52:16 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:24.550 00:52:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:24.550 00:52:16 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.550 00:52:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:24.550 00:52:16 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:24.550 00:52:16 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:24.550 00:52:16 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:24.550 00:52:16 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:24.550 00:52:16 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:24.550 00:52:16 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:24.550 00:52:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:24.550 00:52:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:24.550 00:52:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:24.550 00:52:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:24.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:18:24.550 00:18:24.550 --- 10.0.0.2 ping statistics --- 00:18:24.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.550 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:18:24.550 00:52:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:24.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:24.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:18:24.550 00:18:24.550 --- 10.0.0.1 ping statistics --- 00:18:24.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.550 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:18:24.550 00:52:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.550 00:52:17 -- nvmf/common.sh@411 -- # return 0 00:18:24.550 00:52:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:24.550 00:52:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.550 00:52:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:24.550 00:52:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:24.550 00:52:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.550 00:52:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:24.550 00:52:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:24.550 00:52:17 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:18:24.550 00:52:17 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:18:24.550 00:52:17 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:18:24.550 00:52:17 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:18:24.550 net.core.busy_poll = 1 00:18:24.550 00:52:17 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:18:24.550 net.core.busy_read = 1 00:18:24.550 00:52:17 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:18:24.550 00:52:17 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:18:24.810 00:52:17 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:18:24.810 00:52:17 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:18:24.810 00:52:17 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:18:24.810 00:52:17 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:24.810 00:52:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:24.810 00:52:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:24.810 00:52:17 -- common/autotest_common.sh@10 -- # set +x 00:18:24.810 00:52:17 -- nvmf/common.sh@470 -- # nvmfpid=1720706 00:18:24.810 00:52:17 -- nvmf/common.sh@471 -- # waitforlisten 1720706 00:18:24.810 00:52:17 -- common/autotest_common.sh@817 -- # '[' -z 1720706 ']' 00:18:24.810 00:52:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.810 00:52:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:24.810 00:52:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
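Stripped of the xtrace prefixes, the host-side ADQ setup that adq_configure_driver just applied comes down to the commands below; in this run the ethtool and tc commands are executed inside the target namespace via ip netns exec cvl_0_0_ns_spdk (omitted below for brevity), and all values are the ones logged above:

  # Hardware TC offload on the E810 port carrying NVMe/TCP; packet-inspect optimization off.
  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  # Busy polling keeps receiving threads polling sockets instead of sleeping on interrupts.
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # Two traffic classes: TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (ADQ traffic).
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  # Steer NVMe/TCP flows (dst 10.0.0.2:4420) into TC1 in hardware.
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # perf_adq.sh then pins XPS receive queues with scripts/perf/nvmf/set_xps_rxqs cvl_0_0.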
00:18:24.810 00:52:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:24.810 00:52:17 -- common/autotest_common.sh@10 -- # set +x 00:18:24.810 00:52:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:24.810 [2024-04-27 00:52:17.369937] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:18:24.810 [2024-04-27 00:52:17.369980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.810 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.810 [2024-04-27 00:52:17.425694] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:24.810 [2024-04-27 00:52:17.503444] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.810 [2024-04-27 00:52:17.503480] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.810 [2024-04-27 00:52:17.503487] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.810 [2024-04-27 00:52:17.503493] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.810 [2024-04-27 00:52:17.503498] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:24.810 [2024-04-27 00:52:17.503533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.810 [2024-04-27 00:52:17.503552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.810 [2024-04-27 00:52:17.503568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:24.810 [2024-04-27 00:52:17.503569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.753 00:52:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:25.753 00:52:18 -- common/autotest_common.sh@850 -- # return 0 00:18:25.753 00:52:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:25.753 00:52:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:25.753 00:52:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.753 00:52:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.753 00:52:18 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:18:25.753 00:52:18 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:18:25.753 00:52:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.753 00:52:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.753 00:52:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.753 00:52:18 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:18:25.753 00:52:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.753 00:52:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.753 00:52:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.753 00:52:18 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:18:25.753 00:52:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.753 00:52:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.753 [2024-04-27 00:52:18.313776] tcp.c: 
669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.753 00:52:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.753 00:52:18 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:25.753 00:52:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.753 00:52:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.753 Malloc1 00:18:25.753 00:52:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.753 00:52:18 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:25.753 00:52:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.753 00:52:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.753 00:52:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.753 00:52:18 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:25.753 00:52:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.753 00:52:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.753 00:52:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.753 00:52:18 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.753 00:52:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:25.753 00:52:18 -- common/autotest_common.sh@10 -- # set +x 00:18:25.753 [2024-04-27 00:52:18.361617] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.753 00:52:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:25.753 00:52:18 -- target/perf_adq.sh@94 -- # perfpid=1720951 00:18:25.753 00:52:18 -- target/perf_adq.sh@95 -- # sleep 2 00:18:25.753 00:52:18 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:25.753 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.290 00:52:20 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:18:28.290 00:52:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.290 00:52:20 -- target/perf_adq.sh@97 -- # wc -l 00:18:28.290 00:52:20 -- common/autotest_common.sh@10 -- # set +x 00:18:28.290 00:52:20 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:18:28.290 00:52:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.290 00:52:20 -- target/perf_adq.sh@97 -- # count=2 00:18:28.290 00:52:20 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:18:28.290 00:52:20 -- target/perf_adq.sh@103 -- # wait 1720951 00:18:36.623 Initializing NVMe Controllers 00:18:36.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:36.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:36.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:36.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:36.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:36.623 Initialization complete. Launching workers. 
00:18:36.623 ======================================================== 00:18:36.623 Latency(us) 00:18:36.623 Device Information : IOPS MiB/s Average min max 00:18:36.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7126.10 27.84 9009.22 1977.11 55630.60 00:18:36.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6996.30 27.33 9179.70 1773.78 54211.85 00:18:36.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6691.40 26.14 9564.26 1816.62 55393.08 00:18:36.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6544.30 25.56 9780.97 1836.20 54513.01 00:18:36.623 ======================================================== 00:18:36.623 Total : 27358.09 106.87 9373.18 1773.78 55630.60 00:18:36.623 00:18:36.623 00:52:28 -- target/perf_adq.sh@104 -- # nvmftestfini 00:18:36.623 00:52:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:36.623 00:52:28 -- nvmf/common.sh@117 -- # sync 00:18:36.623 00:52:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:36.623 00:52:28 -- nvmf/common.sh@120 -- # set +e 00:18:36.623 00:52:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:36.623 00:52:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:36.623 rmmod nvme_tcp 00:18:36.623 rmmod nvme_fabrics 00:18:36.623 rmmod nvme_keyring 00:18:36.623 00:52:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:36.623 00:52:28 -- nvmf/common.sh@124 -- # set -e 00:18:36.623 00:52:28 -- nvmf/common.sh@125 -- # return 0 00:18:36.623 00:52:28 -- nvmf/common.sh@478 -- # '[' -n 1720706 ']' 00:18:36.623 00:52:28 -- nvmf/common.sh@479 -- # killprocess 1720706 00:18:36.623 00:52:28 -- common/autotest_common.sh@936 -- # '[' -z 1720706 ']' 00:18:36.623 00:52:28 -- common/autotest_common.sh@940 -- # kill -0 1720706 00:18:36.623 00:52:28 -- common/autotest_common.sh@941 -- # uname 00:18:36.623 00:52:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.623 00:52:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1720706 00:18:36.623 00:52:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:36.623 00:52:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:36.623 00:52:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1720706' 00:18:36.623 killing process with pid 1720706 00:18:36.623 00:52:28 -- common/autotest_common.sh@955 -- # kill 1720706 00:18:36.623 00:52:28 -- common/autotest_common.sh@960 -- # wait 1720706 00:18:36.623 00:52:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:36.623 00:52:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:36.623 00:52:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:36.623 00:52:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.623 00:52:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:36.623 00:52:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.623 00:52:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:36.623 00:52:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.918 00:52:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:39.918 00:52:31 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:18:39.918 00:18:39.918 real 0m49.774s 00:18:39.918 user 2m48.524s 00:18:39.918 sys 0m8.885s 00:18:39.918 00:52:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:39.918 00:52:31 -- common/autotest_common.sh@10 -- # set +x 00:18:39.918 
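On the target side the ADQ run is configured purely over RPC before framework initialization, which is why nvmf_tgt was started with --wait-for-rpc. The rpc_cmd calls traced above (rpc_cmd is the test helper that forwards these to the target's RPC socket, /var/tmp/spdk.sock in this run) amount to:

  # Honour placement IDs and use zero-copy sends in the posix sock implementation;
  # this has to happen before the sock layer initializes, hence the explicit init below.
  rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
  rpc_cmd framework_start_init
  # TCP transport; --sock-priority 1 lines up with the mqprio priority-to-TC map set earlier.
  rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  # 64 MiB malloc-backed namespace, exported as cnode1 and listening on 10.0.0.2:4420.
  rpc_cmd bdev_malloc_create 64 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The earlier, non-ADQ run used the same sequence with --enable-placement-id 0 and --sock-priority 0, and both runs drove the target with the same spdk_nvme_perf command line (-q 64 -o 4096 -w randread -t 10 -c 0xF0).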
************************************ 00:18:39.918 END TEST nvmf_perf_adq 00:18:39.918 ************************************ 00:18:39.918 00:52:31 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:18:39.918 00:52:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:39.918 00:52:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.918 00:52:31 -- common/autotest_common.sh@10 -- # set +x 00:18:39.918 ************************************ 00:18:39.918 START TEST nvmf_shutdown 00:18:39.918 ************************************ 00:18:39.918 00:52:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:18:39.918 * Looking for test storage... 00:18:39.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:39.918 00:52:32 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:39.918 00:52:32 -- nvmf/common.sh@7 -- # uname -s 00:18:39.918 00:52:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.918 00:52:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.918 00:52:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.918 00:52:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.918 00:52:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.918 00:52:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.918 00:52:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.918 00:52:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.918 00:52:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.918 00:52:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.918 00:52:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.918 00:52:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.918 00:52:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.918 00:52:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.918 00:52:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.918 00:52:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.918 00:52:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:39.918 00:52:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.918 00:52:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.918 00:52:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.918 00:52:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.918 00:52:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.918 00:52:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.918 00:52:32 -- paths/export.sh@5 -- # export PATH 00:18:39.918 00:52:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.918 00:52:32 -- nvmf/common.sh@47 -- # : 0 00:18:39.918 00:52:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:39.918 00:52:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:39.918 00:52:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.918 00:52:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.918 00:52:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.918 00:52:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:39.918 00:52:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:39.918 00:52:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:39.918 00:52:32 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:39.918 00:52:32 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:39.918 00:52:32 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:18:39.918 00:52:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:39.918 00:52:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.918 00:52:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.918 ************************************ 00:18:39.918 START TEST nvmf_shutdown_tc1 00:18:39.918 ************************************ 00:18:39.918 00:52:32 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:18:39.918 00:52:32 -- target/shutdown.sh@74 -- # starttarget 00:18:39.918 00:52:32 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:39.918 00:52:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:39.918 00:52:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.918 00:52:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:39.918 00:52:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:39.918 00:52:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:39.918 
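shutdown.sh opens exactly the way perf_adq.sh did: every nvmf target test in this log brackets its body with the same helpers from test/nvmf/common.sh. As a rough skeleton, assuming the repository layout shown in the paths above and that the transport has been selected via the --transport=tcp argument (helper names are the ones appearing in the trace):

  #!/usr/bin/env bash
  # Outline of the per-test lifecycle used throughout this log.
  source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
  nvmftestinit                        # pick the transport, clean stale namespaces, rebuild
                                      # the cvl_0_0_ns_spdk topology, modprobe nvme-tcp
  trap 'nvmftestfini' SIGINT SIGTERM EXIT
  nvmfappstart -m 0xF                 # launch nvmf_tgt inside the target namespace
  # ... per-test RPC configuration and workload ...
  nvmftestfini                        # unload the nvme modules, kill the target, flush
                                      # addresses and drop the namespace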
00:52:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.918 00:52:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.918 00:52:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.918 00:52:32 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:39.918 00:52:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:39.918 00:52:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:39.918 00:52:32 -- common/autotest_common.sh@10 -- # set +x 00:18:45.253 00:52:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:45.253 00:52:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:45.253 00:52:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:45.253 00:52:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:45.253 00:52:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:45.253 00:52:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:45.253 00:52:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:45.253 00:52:37 -- nvmf/common.sh@295 -- # net_devs=() 00:18:45.253 00:52:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:45.253 00:52:37 -- nvmf/common.sh@296 -- # e810=() 00:18:45.253 00:52:37 -- nvmf/common.sh@296 -- # local -ga e810 00:18:45.253 00:52:37 -- nvmf/common.sh@297 -- # x722=() 00:18:45.253 00:52:37 -- nvmf/common.sh@297 -- # local -ga x722 00:18:45.253 00:52:37 -- nvmf/common.sh@298 -- # mlx=() 00:18:45.253 00:52:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:45.253 00:52:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.253 00:52:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.253 00:52:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.253 00:52:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.253 00:52:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.253 00:52:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.253 00:52:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.253 00:52:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.253 00:52:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.253 00:52:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.253 00:52:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.253 00:52:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:45.253 00:52:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:45.253 00:52:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:45.253 00:52:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.253 00:52:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:45.253 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:45.253 00:52:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:18:45.253 00:52:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:45.253 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:45.253 00:52:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:45.253 00:52:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:45.253 00:52:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.253 00:52:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.253 00:52:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:45.253 00:52:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.253 00:52:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:45.253 Found net devices under 0000:86:00.0: cvl_0_0 00:18:45.253 00:52:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.253 00:52:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.253 00:52:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.253 00:52:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:45.253 00:52:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.253 00:52:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:45.253 Found net devices under 0000:86:00.1: cvl_0_1 00:18:45.253 00:52:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.253 00:52:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:45.253 00:52:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:45.253 00:52:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:45.254 00:52:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:45.254 00:52:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:45.254 00:52:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.254 00:52:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.254 00:52:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.254 00:52:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:45.254 00:52:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.254 00:52:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.254 00:52:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:45.254 00:52:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.254 00:52:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.254 00:52:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:45.254 00:52:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:45.254 00:52:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.254 00:52:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.254 00:52:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.254 00:52:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.254 00:52:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:45.254 00:52:37 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.254 00:52:37 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.254 00:52:37 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.254 00:52:37 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:45.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:18:45.254 00:18:45.254 --- 10.0.0.2 ping statistics --- 00:18:45.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.254 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:18:45.254 00:52:37 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:18:45.254 00:18:45.254 --- 10.0.0.1 ping statistics --- 00:18:45.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.254 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:18:45.254 00:52:37 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.254 00:52:37 -- nvmf/common.sh@411 -- # return 0 00:18:45.254 00:52:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:45.254 00:52:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.254 00:52:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:45.254 00:52:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:45.254 00:52:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.254 00:52:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:45.254 00:52:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:45.254 00:52:37 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:45.254 00:52:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:45.254 00:52:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:45.254 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:18:45.254 00:52:37 -- nvmf/common.sh@470 -- # nvmfpid=1726275 00:18:45.254 00:52:37 -- nvmf/common.sh@471 -- # waitforlisten 1726275 00:18:45.254 00:52:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:45.254 00:52:37 -- common/autotest_common.sh@817 -- # '[' -z 1726275 ']' 00:18:45.254 00:52:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.254 00:52:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:45.254 00:52:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.254 00:52:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:45.254 00:52:37 -- common/autotest_common.sh@10 -- # set +x 00:18:45.254 [2024-04-27 00:52:37.675470] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:18:45.254 [2024-04-27 00:52:37.675516] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.254 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.254 [2024-04-27 00:52:37.732028] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.254 [2024-04-27 00:52:37.809684] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.254 [2024-04-27 00:52:37.809720] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.254 [2024-04-27 00:52:37.809727] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.254 [2024-04-27 00:52:37.809733] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.254 [2024-04-27 00:52:37.809739] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.254 [2024-04-27 00:52:37.809833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.254 [2024-04-27 00:52:37.809919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.254 [2024-04-27 00:52:37.810026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.254 [2024-04-27 00:52:37.810027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:45.824 00:52:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:45.824 00:52:38 -- common/autotest_common.sh@850 -- # return 0 00:18:45.824 00:52:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:45.824 00:52:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:45.824 00:52:38 -- common/autotest_common.sh@10 -- # set +x 00:18:46.084 00:52:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.084 00:52:38 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:46.084 00:52:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.084 00:52:38 -- common/autotest_common.sh@10 -- # set +x 00:18:46.084 [2024-04-27 00:52:38.523983] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.084 00:52:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.084 00:52:38 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:46.084 00:52:38 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:46.084 00:52:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:46.084 00:52:38 -- common/autotest_common.sh@10 -- # set +x 00:18:46.084 00:52:38 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:46.084 00:52:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:46.084 00:52:38 -- target/shutdown.sh@28 -- # cat 00:18:46.084 00:52:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:46.084 00:52:38 -- target/shutdown.sh@28 -- # cat 00:18:46.084 00:52:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:46.084 00:52:38 -- target/shutdown.sh@28 -- # cat 00:18:46.084 00:52:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:46.084 00:52:38 -- target/shutdown.sh@28 -- # cat 00:18:46.084 00:52:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:46.084 00:52:38 -- target/shutdown.sh@28 
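The trace above is the nvmf_tcp_init / nvmfappstart sequence: the first E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2 to act as the target, the second port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and nvmf_tgt is then launched inside that namespace on cores 1-4 (mask 0x1E). A condensed sketch of the same bring-up, using the interface names and addresses from this run; the helpers in nvmf/common.sh wrap these calls with flushes and checks not repeated here, and the relative nvmf_tgt path is an assumption:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) through
  ping -c 1 10.0.0.2                                             # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator reachability
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &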
-- # cat 00:18:46.084 00:52:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:46.084 00:52:38 -- target/shutdown.sh@28 -- # cat 00:18:46.085 00:52:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:46.085 00:52:38 -- target/shutdown.sh@28 -- # cat 00:18:46.085 00:52:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:46.085 00:52:38 -- target/shutdown.sh@28 -- # cat 00:18:46.085 00:52:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:46.085 00:52:38 -- target/shutdown.sh@28 -- # cat 00:18:46.085 00:52:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:46.085 00:52:38 -- target/shutdown.sh@28 -- # cat 00:18:46.085 00:52:38 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:46.085 00:52:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:46.085 00:52:38 -- common/autotest_common.sh@10 -- # set +x 00:18:46.085 Malloc1 00:18:46.085 [2024-04-27 00:52:38.619903] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.085 Malloc2 00:18:46.085 Malloc3 00:18:46.085 Malloc4 00:18:46.085 Malloc5 00:18:46.345 Malloc6 00:18:46.345 Malloc7 00:18:46.345 Malloc8 00:18:46.345 Malloc9 00:18:46.345 Malloc10 00:18:46.345 00:52:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:46.345 00:52:39 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:46.345 00:52:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:46.345 00:52:39 -- common/autotest_common.sh@10 -- # set +x 00:18:46.605 00:52:39 -- target/shutdown.sh@78 -- # perfpid=1726558 00:18:46.605 00:52:39 -- target/shutdown.sh@79 -- # waitforlisten 1726558 /var/tmp/bdevperf.sock 00:18:46.605 00:52:39 -- common/autotest_common.sh@817 -- # '[' -z 1726558 ']' 00:18:46.605 00:52:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.605 00:52:39 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:18:46.605 00:52:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:46.605 00:52:39 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:46.605 00:52:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
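The create_subsystems loop traced above (target/shutdown.sh@26-35) appends one batch of RPCs per subsystem to rpcs.txt and then replays the file against the running target, which is why Malloc1 through Malloc10 appear and a TCP listener comes up on 10.0.0.2:4420. A hedged sketch of what each iteration amounts to in scripts/rpc.py terms; the bdev size, block size, and serial number below are illustrative and not values taken from this log:

  for i in {1..10}; do
    ./scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512                           # illustrative 64 MiB / 512 B
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i  # serial is illustrative
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done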
00:18:46.605 00:52:39 -- nvmf/common.sh@521 -- # config=() 00:18:46.605 00:52:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:46.605 00:52:39 -- nvmf/common.sh@521 -- # local subsystem config 00:18:46.605 00:52:39 -- common/autotest_common.sh@10 -- # set +x 00:18:46.605 00:52:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:46.605 00:52:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:46.605 { 00:18:46.605 "params": { 00:18:46.605 "name": "Nvme$subsystem", 00:18:46.605 "trtype": "$TEST_TRANSPORT", 00:18:46.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.605 "adrfam": "ipv4", 00:18:46.605 "trsvcid": "$NVMF_PORT", 00:18:46.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.605 "hdgst": ${hdgst:-false}, 00:18:46.605 "ddgst": ${ddgst:-false} 00:18:46.605 }, 00:18:46.605 "method": "bdev_nvme_attach_controller" 00:18:46.605 } 00:18:46.605 EOF 00:18:46.605 )") 00:18:46.605 00:52:39 -- nvmf/common.sh@543 -- # cat 00:18:46.605 00:52:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:46.605 00:52:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:46.605 { 00:18:46.605 "params": { 00:18:46.605 "name": "Nvme$subsystem", 00:18:46.605 "trtype": "$TEST_TRANSPORT", 00:18:46.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "$NVMF_PORT", 00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.606 "hdgst": ${hdgst:-false}, 00:18:46.606 "ddgst": ${ddgst:-false} 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 } 00:18:46.606 EOF 00:18:46.606 )") 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # cat 00:18:46.606 00:52:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:46.606 { 00:18:46.606 "params": { 00:18:46.606 "name": "Nvme$subsystem", 00:18:46.606 "trtype": "$TEST_TRANSPORT", 00:18:46.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "$NVMF_PORT", 00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.606 "hdgst": ${hdgst:-false}, 00:18:46.606 "ddgst": ${ddgst:-false} 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 } 00:18:46.606 EOF 00:18:46.606 )") 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # cat 00:18:46.606 00:52:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:46.606 { 00:18:46.606 "params": { 00:18:46.606 "name": "Nvme$subsystem", 00:18:46.606 "trtype": "$TEST_TRANSPORT", 00:18:46.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "$NVMF_PORT", 00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.606 "hdgst": ${hdgst:-false}, 00:18:46.606 "ddgst": ${ddgst:-false} 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 } 00:18:46.606 EOF 00:18:46.606 )") 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # cat 00:18:46.606 00:52:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:46.606 { 00:18:46.606 "params": { 00:18:46.606 "name": "Nvme$subsystem", 00:18:46.606 "trtype": 
"$TEST_TRANSPORT", 00:18:46.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "$NVMF_PORT", 00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.606 "hdgst": ${hdgst:-false}, 00:18:46.606 "ddgst": ${ddgst:-false} 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 } 00:18:46.606 EOF 00:18:46.606 )") 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # cat 00:18:46.606 00:52:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:46.606 { 00:18:46.606 "params": { 00:18:46.606 "name": "Nvme$subsystem", 00:18:46.606 "trtype": "$TEST_TRANSPORT", 00:18:46.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "$NVMF_PORT", 00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.606 "hdgst": ${hdgst:-false}, 00:18:46.606 "ddgst": ${ddgst:-false} 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 } 00:18:46.606 EOF 00:18:46.606 )") 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # cat 00:18:46.606 [2024-04-27 00:52:39.087343] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:18:46.606 [2024-04-27 00:52:39.087392] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:46.606 00:52:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:46.606 { 00:18:46.606 "params": { 00:18:46.606 "name": "Nvme$subsystem", 00:18:46.606 "trtype": "$TEST_TRANSPORT", 00:18:46.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "$NVMF_PORT", 00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.606 "hdgst": ${hdgst:-false}, 00:18:46.606 "ddgst": ${ddgst:-false} 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 } 00:18:46.606 EOF 00:18:46.606 )") 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # cat 00:18:46.606 00:52:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:46.606 { 00:18:46.606 "params": { 00:18:46.606 "name": "Nvme$subsystem", 00:18:46.606 "trtype": "$TEST_TRANSPORT", 00:18:46.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "$NVMF_PORT", 00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.606 "hdgst": ${hdgst:-false}, 00:18:46.606 "ddgst": ${ddgst:-false} 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 } 00:18:46.606 EOF 00:18:46.606 )") 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # cat 00:18:46.606 00:52:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:46.606 { 00:18:46.606 "params": { 00:18:46.606 "name": "Nvme$subsystem", 00:18:46.606 "trtype": "$TEST_TRANSPORT", 00:18:46.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "$NVMF_PORT", 
00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.606 "hdgst": ${hdgst:-false}, 00:18:46.606 "ddgst": ${ddgst:-false} 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 } 00:18:46.606 EOF 00:18:46.606 )") 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # cat 00:18:46.606 00:52:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:46.606 { 00:18:46.606 "params": { 00:18:46.606 "name": "Nvme$subsystem", 00:18:46.606 "trtype": "$TEST_TRANSPORT", 00:18:46.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "$NVMF_PORT", 00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.606 "hdgst": ${hdgst:-false}, 00:18:46.606 "ddgst": ${ddgst:-false} 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 } 00:18:46.606 EOF 00:18:46.606 )") 00:18:46.606 00:52:39 -- nvmf/common.sh@543 -- # cat 00:18:46.606 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.606 00:52:39 -- nvmf/common.sh@545 -- # jq . 00:18:46.606 00:52:39 -- nvmf/common.sh@546 -- # IFS=, 00:18:46.606 00:52:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:46.606 "params": { 00:18:46.606 "name": "Nvme1", 00:18:46.606 "trtype": "tcp", 00:18:46.606 "traddr": "10.0.0.2", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "4420", 00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.606 "hdgst": false, 00:18:46.606 "ddgst": false 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 },{ 00:18:46.606 "params": { 00:18:46.606 "name": "Nvme2", 00:18:46.606 "trtype": "tcp", 00:18:46.606 "traddr": "10.0.0.2", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "4420", 00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:46.606 "hdgst": false, 00:18:46.606 "ddgst": false 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 },{ 00:18:46.606 "params": { 00:18:46.606 "name": "Nvme3", 00:18:46.606 "trtype": "tcp", 00:18:46.606 "traddr": "10.0.0.2", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "4420", 00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:46.606 "hdgst": false, 00:18:46.606 "ddgst": false 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 },{ 00:18:46.606 "params": { 00:18:46.606 "name": "Nvme4", 00:18:46.606 "trtype": "tcp", 00:18:46.606 "traddr": "10.0.0.2", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "4420", 00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:46.606 "hdgst": false, 00:18:46.606 "ddgst": false 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 },{ 00:18:46.606 "params": { 00:18:46.606 "name": "Nvme5", 00:18:46.606 "trtype": "tcp", 00:18:46.606 "traddr": "10.0.0.2", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "4420", 00:18:46.606 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:46.606 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:46.606 "hdgst": false, 00:18:46.606 "ddgst": false 00:18:46.606 }, 00:18:46.606 "method": "bdev_nvme_attach_controller" 00:18:46.606 },{ 00:18:46.606 "params": { 
00:18:46.606 "name": "Nvme6", 00:18:46.606 "trtype": "tcp", 00:18:46.606 "traddr": "10.0.0.2", 00:18:46.606 "adrfam": "ipv4", 00:18:46.606 "trsvcid": "4420", 00:18:46.607 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:46.607 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:46.607 "hdgst": false, 00:18:46.607 "ddgst": false 00:18:46.607 }, 00:18:46.607 "method": "bdev_nvme_attach_controller" 00:18:46.607 },{ 00:18:46.607 "params": { 00:18:46.607 "name": "Nvme7", 00:18:46.607 "trtype": "tcp", 00:18:46.607 "traddr": "10.0.0.2", 00:18:46.607 "adrfam": "ipv4", 00:18:46.607 "trsvcid": "4420", 00:18:46.607 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:46.607 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:46.607 "hdgst": false, 00:18:46.607 "ddgst": false 00:18:46.607 }, 00:18:46.607 "method": "bdev_nvme_attach_controller" 00:18:46.607 },{ 00:18:46.607 "params": { 00:18:46.607 "name": "Nvme8", 00:18:46.607 "trtype": "tcp", 00:18:46.607 "traddr": "10.0.0.2", 00:18:46.607 "adrfam": "ipv4", 00:18:46.607 "trsvcid": "4420", 00:18:46.607 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:46.607 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:46.607 "hdgst": false, 00:18:46.607 "ddgst": false 00:18:46.607 }, 00:18:46.607 "method": "bdev_nvme_attach_controller" 00:18:46.607 },{ 00:18:46.607 "params": { 00:18:46.607 "name": "Nvme9", 00:18:46.607 "trtype": "tcp", 00:18:46.607 "traddr": "10.0.0.2", 00:18:46.607 "adrfam": "ipv4", 00:18:46.607 "trsvcid": "4420", 00:18:46.607 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:46.607 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:46.607 "hdgst": false, 00:18:46.607 "ddgst": false 00:18:46.607 }, 00:18:46.607 "method": "bdev_nvme_attach_controller" 00:18:46.607 },{ 00:18:46.607 "params": { 00:18:46.607 "name": "Nvme10", 00:18:46.607 "trtype": "tcp", 00:18:46.607 "traddr": "10.0.0.2", 00:18:46.607 "adrfam": "ipv4", 00:18:46.607 "trsvcid": "4420", 00:18:46.607 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:46.607 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:46.607 "hdgst": false, 00:18:46.607 "ddgst": false 00:18:46.607 }, 00:18:46.607 "method": "bdev_nvme_attach_controller" 00:18:46.607 }' 00:18:46.607 [2024-04-27 00:52:39.144739] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.607 [2024-04-27 00:52:39.215723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.988 00:52:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:47.988 00:52:40 -- common/autotest_common.sh@850 -- # return 0 00:18:47.988 00:52:40 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:47.988 00:52:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.988 00:52:40 -- common/autotest_common.sh@10 -- # set +x 00:18:47.988 00:52:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.988 00:52:40 -- target/shutdown.sh@83 -- # kill -9 1726558 00:18:47.988 00:52:40 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:18:47.988 00:52:40 -- target/shutdown.sh@87 -- # sleep 1 00:18:48.927 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1726558 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:18:48.927 00:52:41 -- target/shutdown.sh@88 -- # kill -0 1726275 00:18:48.927 00:52:41 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:48.927 00:52:41 -- target/shutdown.sh@91 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:48.927 00:52:41 -- nvmf/common.sh@521 -- # config=() 00:18:48.927 00:52:41 -- nvmf/common.sh@521 -- # local subsystem config 00:18:48.927 00:52:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:48.927 00:52:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:48.927 { 00:18:48.927 "params": { 00:18:48.927 "name": "Nvme$subsystem", 00:18:48.927 "trtype": "$TEST_TRANSPORT", 00:18:48.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:48.927 "adrfam": "ipv4", 00:18:48.927 "trsvcid": "$NVMF_PORT", 00:18:48.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:48.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:48.927 "hdgst": ${hdgst:-false}, 00:18:48.927 "ddgst": ${ddgst:-false} 00:18:48.927 }, 00:18:48.927 "method": "bdev_nvme_attach_controller" 00:18:48.927 } 00:18:48.927 EOF 00:18:48.927 )") 00:18:48.927 00:52:41 -- nvmf/common.sh@543 -- # cat 00:18:48.927 00:52:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:48.927 00:52:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:48.927 { 00:18:48.927 "params": { 00:18:48.927 "name": "Nvme$subsystem", 00:18:48.927 "trtype": "$TEST_TRANSPORT", 00:18:48.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:48.927 "adrfam": "ipv4", 00:18:48.927 "trsvcid": "$NVMF_PORT", 00:18:48.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:48.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:48.927 "hdgst": ${hdgst:-false}, 00:18:48.927 "ddgst": ${ddgst:-false} 00:18:48.927 }, 00:18:48.927 "method": "bdev_nvme_attach_controller" 00:18:48.927 } 00:18:48.927 EOF 00:18:48.927 )") 00:18:48.927 00:52:41 -- nvmf/common.sh@543 -- # cat 00:18:48.927 00:52:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:48.927 00:52:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:48.927 { 00:18:48.927 "params": { 00:18:48.927 "name": "Nvme$subsystem", 00:18:48.927 "trtype": "$TEST_TRANSPORT", 00:18:48.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:48.927 "adrfam": "ipv4", 00:18:48.927 "trsvcid": "$NVMF_PORT", 00:18:48.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:48.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:48.927 "hdgst": ${hdgst:-false}, 00:18:48.928 "ddgst": ${ddgst:-false} 00:18:48.928 }, 00:18:48.928 "method": "bdev_nvme_attach_controller" 00:18:48.928 } 00:18:48.928 EOF 00:18:48.928 )") 00:18:48.928 00:52:41 -- nvmf/common.sh@543 -- # cat 00:18:48.928 00:52:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:48.928 00:52:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:48.928 { 00:18:48.928 "params": { 00:18:48.928 "name": "Nvme$subsystem", 00:18:48.928 "trtype": "$TEST_TRANSPORT", 00:18:48.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:48.928 "adrfam": "ipv4", 00:18:48.928 "trsvcid": "$NVMF_PORT", 00:18:48.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:48.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:48.928 "hdgst": ${hdgst:-false}, 00:18:48.928 "ddgst": ${ddgst:-false} 00:18:48.928 }, 00:18:48.928 "method": "bdev_nvme_attach_controller" 00:18:48.928 } 00:18:48.928 EOF 00:18:48.928 )") 00:18:48.928 00:52:41 -- nvmf/common.sh@543 -- # cat 00:18:48.928 00:52:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:48.928 00:52:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:48.928 { 00:18:48.928 "params": { 00:18:48.928 "name": "Nvme$subsystem", 00:18:48.928 "trtype": "$TEST_TRANSPORT", 00:18:48.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:48.928 "adrfam": "ipv4", 
00:18:48.928 "trsvcid": "$NVMF_PORT", 00:18:48.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:48.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:48.928 "hdgst": ${hdgst:-false}, 00:18:48.928 "ddgst": ${ddgst:-false} 00:18:48.928 }, 00:18:48.928 "method": "bdev_nvme_attach_controller" 00:18:48.928 } 00:18:48.928 EOF 00:18:48.928 )") 00:18:48.928 00:52:41 -- nvmf/common.sh@543 -- # cat 00:18:48.928 00:52:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:48.928 00:52:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:48.928 { 00:18:48.928 "params": { 00:18:48.928 "name": "Nvme$subsystem", 00:18:48.928 "trtype": "$TEST_TRANSPORT", 00:18:48.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:48.928 "adrfam": "ipv4", 00:18:48.928 "trsvcid": "$NVMF_PORT", 00:18:48.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:48.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:48.928 "hdgst": ${hdgst:-false}, 00:18:48.928 "ddgst": ${ddgst:-false} 00:18:48.928 }, 00:18:48.928 "method": "bdev_nvme_attach_controller" 00:18:48.928 } 00:18:48.928 EOF 00:18:48.928 )") 00:18:48.928 00:52:41 -- nvmf/common.sh@543 -- # cat 00:18:48.928 [2024-04-27 00:52:41.606667] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:18:48.928 00:52:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:48.928 [2024-04-27 00:52:41.606716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1726959 ] 00:18:48.928 00:52:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:48.928 { 00:18:48.928 "params": { 00:18:48.928 "name": "Nvme$subsystem", 00:18:48.928 "trtype": "$TEST_TRANSPORT", 00:18:48.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:48.928 "adrfam": "ipv4", 00:18:48.928 "trsvcid": "$NVMF_PORT", 00:18:48.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:48.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:48.928 "hdgst": ${hdgst:-false}, 00:18:48.928 "ddgst": ${ddgst:-false} 00:18:48.928 }, 00:18:48.928 "method": "bdev_nvme_attach_controller" 00:18:48.928 } 00:18:48.928 EOF 00:18:48.928 )") 00:18:48.928 00:52:41 -- nvmf/common.sh@543 -- # cat 00:18:48.928 00:52:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:48.928 00:52:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:48.928 { 00:18:48.928 "params": { 00:18:48.928 "name": "Nvme$subsystem", 00:18:48.928 "trtype": "$TEST_TRANSPORT", 00:18:48.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:48.928 "adrfam": "ipv4", 00:18:48.928 "trsvcid": "$NVMF_PORT", 00:18:48.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:48.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:48.928 "hdgst": ${hdgst:-false}, 00:18:48.928 "ddgst": ${ddgst:-false} 00:18:48.928 }, 00:18:48.928 "method": "bdev_nvme_attach_controller" 00:18:48.928 } 00:18:48.928 EOF 00:18:48.928 )") 00:18:48.928 00:52:41 -- nvmf/common.sh@543 -- # cat 00:18:48.928 00:52:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:48.928 00:52:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:48.928 { 00:18:48.928 "params": { 00:18:48.928 "name": "Nvme$subsystem", 00:18:48.928 "trtype": "$TEST_TRANSPORT", 00:18:48.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:48.928 "adrfam": "ipv4", 00:18:48.928 "trsvcid": "$NVMF_PORT", 00:18:48.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:48.928 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:18:48.928 "hdgst": ${hdgst:-false}, 00:18:48.928 "ddgst": ${ddgst:-false} 00:18:48.928 }, 00:18:48.928 "method": "bdev_nvme_attach_controller" 00:18:48.928 } 00:18:48.928 EOF 00:18:48.928 )") 00:18:48.928 00:52:41 -- nvmf/common.sh@543 -- # cat 00:18:49.189 00:52:41 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:49.189 00:52:41 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:49.189 { 00:18:49.189 "params": { 00:18:49.189 "name": "Nvme$subsystem", 00:18:49.189 "trtype": "$TEST_TRANSPORT", 00:18:49.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.189 "adrfam": "ipv4", 00:18:49.189 "trsvcid": "$NVMF_PORT", 00:18:49.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.189 "hdgst": ${hdgst:-false}, 00:18:49.189 "ddgst": ${ddgst:-false} 00:18:49.189 }, 00:18:49.189 "method": "bdev_nvme_attach_controller" 00:18:49.189 } 00:18:49.189 EOF 00:18:49.189 )") 00:18:49.189 00:52:41 -- nvmf/common.sh@543 -- # cat 00:18:49.189 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.189 00:52:41 -- nvmf/common.sh@545 -- # jq . 00:18:49.189 00:52:41 -- nvmf/common.sh@546 -- # IFS=, 00:18:49.189 00:52:41 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:49.189 "params": { 00:18:49.189 "name": "Nvme1", 00:18:49.189 "trtype": "tcp", 00:18:49.189 "traddr": "10.0.0.2", 00:18:49.189 "adrfam": "ipv4", 00:18:49.189 "trsvcid": "4420", 00:18:49.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.189 "hdgst": false, 00:18:49.189 "ddgst": false 00:18:49.189 }, 00:18:49.189 "method": "bdev_nvme_attach_controller" 00:18:49.189 },{ 00:18:49.189 "params": { 00:18:49.189 "name": "Nvme2", 00:18:49.189 "trtype": "tcp", 00:18:49.189 "traddr": "10.0.0.2", 00:18:49.189 "adrfam": "ipv4", 00:18:49.189 "trsvcid": "4420", 00:18:49.189 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:49.189 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:49.189 "hdgst": false, 00:18:49.189 "ddgst": false 00:18:49.189 }, 00:18:49.189 "method": "bdev_nvme_attach_controller" 00:18:49.189 },{ 00:18:49.189 "params": { 00:18:49.189 "name": "Nvme3", 00:18:49.189 "trtype": "tcp", 00:18:49.189 "traddr": "10.0.0.2", 00:18:49.189 "adrfam": "ipv4", 00:18:49.189 "trsvcid": "4420", 00:18:49.189 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:49.189 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:49.189 "hdgst": false, 00:18:49.189 "ddgst": false 00:18:49.189 }, 00:18:49.189 "method": "bdev_nvme_attach_controller" 00:18:49.189 },{ 00:18:49.189 "params": { 00:18:49.189 "name": "Nvme4", 00:18:49.189 "trtype": "tcp", 00:18:49.189 "traddr": "10.0.0.2", 00:18:49.189 "adrfam": "ipv4", 00:18:49.189 "trsvcid": "4420", 00:18:49.189 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:49.189 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:49.189 "hdgst": false, 00:18:49.189 "ddgst": false 00:18:49.189 }, 00:18:49.189 "method": "bdev_nvme_attach_controller" 00:18:49.189 },{ 00:18:49.189 "params": { 00:18:49.189 "name": "Nvme5", 00:18:49.189 "trtype": "tcp", 00:18:49.189 "traddr": "10.0.0.2", 00:18:49.189 "adrfam": "ipv4", 00:18:49.189 "trsvcid": "4420", 00:18:49.189 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:49.189 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:49.189 "hdgst": false, 00:18:49.189 "ddgst": false 00:18:49.189 }, 00:18:49.189 "method": "bdev_nvme_attach_controller" 00:18:49.189 },{ 00:18:49.189 "params": { 00:18:49.189 "name": "Nvme6", 00:18:49.189 "trtype": "tcp", 00:18:49.189 "traddr": 
"10.0.0.2", 00:18:49.189 "adrfam": "ipv4", 00:18:49.189 "trsvcid": "4420", 00:18:49.189 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:49.189 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:49.189 "hdgst": false, 00:18:49.189 "ddgst": false 00:18:49.189 }, 00:18:49.189 "method": "bdev_nvme_attach_controller" 00:18:49.189 },{ 00:18:49.189 "params": { 00:18:49.189 "name": "Nvme7", 00:18:49.189 "trtype": "tcp", 00:18:49.189 "traddr": "10.0.0.2", 00:18:49.189 "adrfam": "ipv4", 00:18:49.189 "trsvcid": "4420", 00:18:49.189 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:49.189 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:49.189 "hdgst": false, 00:18:49.189 "ddgst": false 00:18:49.189 }, 00:18:49.189 "method": "bdev_nvme_attach_controller" 00:18:49.189 },{ 00:18:49.189 "params": { 00:18:49.189 "name": "Nvme8", 00:18:49.189 "trtype": "tcp", 00:18:49.189 "traddr": "10.0.0.2", 00:18:49.189 "adrfam": "ipv4", 00:18:49.189 "trsvcid": "4420", 00:18:49.189 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:49.189 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:49.189 "hdgst": false, 00:18:49.189 "ddgst": false 00:18:49.189 }, 00:18:49.189 "method": "bdev_nvme_attach_controller" 00:18:49.189 },{ 00:18:49.189 "params": { 00:18:49.189 "name": "Nvme9", 00:18:49.189 "trtype": "tcp", 00:18:49.189 "traddr": "10.0.0.2", 00:18:49.189 "adrfam": "ipv4", 00:18:49.189 "trsvcid": "4420", 00:18:49.189 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:49.189 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:49.189 "hdgst": false, 00:18:49.189 "ddgst": false 00:18:49.189 }, 00:18:49.189 "method": "bdev_nvme_attach_controller" 00:18:49.189 },{ 00:18:49.189 "params": { 00:18:49.189 "name": "Nvme10", 00:18:49.189 "trtype": "tcp", 00:18:49.189 "traddr": "10.0.0.2", 00:18:49.189 "adrfam": "ipv4", 00:18:49.189 "trsvcid": "4420", 00:18:49.189 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:49.189 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:49.189 "hdgst": false, 00:18:49.189 "ddgst": false 00:18:49.189 }, 00:18:49.189 "method": "bdev_nvme_attach_controller" 00:18:49.189 }' 00:18:49.189 [2024-04-27 00:52:41.662257] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.189 [2024-04-27 00:52:41.733633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.567 Running I/O for 1 seconds... 
00:18:51.954 00:18:51.954 Latency(us) 00:18:51.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.954 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:51.954 Verification LBA range: start 0x0 length 0x400 00:18:51.954 Nvme1n1 : 1.15 279.39 17.46 0.00 0.00 226907.18 20515.62 230686.72 00:18:51.954 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:51.954 Verification LBA range: start 0x0 length 0x400 00:18:51.954 Nvme2n1 : 1.15 222.59 13.91 0.00 0.00 281292.35 20059.71 253481.85 00:18:51.954 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:51.954 Verification LBA range: start 0x0 length 0x400 00:18:51.954 Nvme3n1 : 1.13 227.47 14.22 0.00 0.00 270823.74 21997.30 228863.11 00:18:51.954 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:51.955 Verification LBA range: start 0x0 length 0x400 00:18:51.955 Nvme4n1 : 1.15 221.95 13.87 0.00 0.00 273026.45 24048.86 273541.57 00:18:51.955 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:51.955 Verification LBA range: start 0x0 length 0x400 00:18:51.955 Nvme5n1 : 1.15 223.35 13.96 0.00 0.00 267854.80 25416.57 235245.75 00:18:51.955 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:51.955 Verification LBA range: start 0x0 length 0x400 00:18:51.955 Nvme6n1 : 1.12 228.17 14.26 0.00 0.00 257813.82 35788.35 232510.33 00:18:51.955 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:51.955 Verification LBA range: start 0x0 length 0x400 00:18:51.955 Nvme7n1 : 1.17 273.95 17.12 0.00 0.00 212545.18 20629.59 212450.62 00:18:51.955 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:51.955 Verification LBA range: start 0x0 length 0x400 00:18:51.955 Nvme8n1 : 1.16 278.79 17.42 0.00 0.00 205557.38 1866.35 235245.75 00:18:51.955 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:51.955 Verification LBA range: start 0x0 length 0x400 00:18:51.955 Nvme9n1 : 1.21 318.50 19.91 0.00 0.00 171711.30 18805.98 215186.03 00:18:51.955 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:51.955 Verification LBA range: start 0x0 length 0x400 00:18:51.955 Nvme10n1 : 1.17 273.48 17.09 0.00 0.00 203395.43 16640.45 238892.97 00:18:51.955 =================================================================================================================== 00:18:51.955 Total : 2547.63 159.23 0.00 0.00 232040.93 1866.35 273541.57 00:18:51.955 00:52:44 -- target/shutdown.sh@94 -- # stoptarget 00:18:51.955 00:52:44 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:18:51.955 00:52:44 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:51.955 00:52:44 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:51.955 00:52:44 -- target/shutdown.sh@45 -- # nvmftestfini 00:18:51.955 00:52:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:51.955 00:52:44 -- nvmf/common.sh@117 -- # sync 00:18:51.955 00:52:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.955 00:52:44 -- nvmf/common.sh@120 -- # set +e 00:18:51.955 00:52:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.955 00:52:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.955 rmmod nvme_tcp 00:18:51.955 rmmod nvme_fabrics 00:18:51.955 rmmod nvme_keyring 
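In the result table above, the MiB/s column is the IOPS column scaled by the 65536-byte I/O size: one I/O is 1/16 MiB, so MiB/s = IOPS / 16 (for the Total row, 2547.63 / 16 ≈ 159.23; for Nvme1n1, 279.39 / 16 ≈ 17.46). A one-line check, assuming bc is installed:

  echo '2547.63 / 16' | bc -l   # 159.2268..., matching the reported 159.23 MiB/s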
00:18:51.955 00:52:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:51.955 00:52:44 -- nvmf/common.sh@124 -- # set -e 00:18:51.955 00:52:44 -- nvmf/common.sh@125 -- # return 0 00:18:51.955 00:52:44 -- nvmf/common.sh@478 -- # '[' -n 1726275 ']' 00:18:51.955 00:52:44 -- nvmf/common.sh@479 -- # killprocess 1726275 00:18:51.955 00:52:44 -- common/autotest_common.sh@936 -- # '[' -z 1726275 ']' 00:18:51.955 00:52:44 -- common/autotest_common.sh@940 -- # kill -0 1726275 00:18:51.955 00:52:44 -- common/autotest_common.sh@941 -- # uname 00:18:51.955 00:52:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:51.955 00:52:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1726275 00:18:52.223 00:52:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:52.223 00:52:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:52.223 00:52:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1726275' 00:18:52.223 killing process with pid 1726275 00:18:52.223 00:52:44 -- common/autotest_common.sh@955 -- # kill 1726275 00:18:52.223 00:52:44 -- common/autotest_common.sh@960 -- # wait 1726275 00:18:52.482 00:52:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:52.482 00:52:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:52.482 00:52:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:52.482 00:52:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:52.482 00:52:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:52.482 00:52:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.482 00:52:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.482 00:52:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.015 00:52:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:55.015 00:18:55.015 real 0m14.837s 00:18:55.015 user 0m34.245s 00:18:55.015 sys 0m5.320s 00:18:55.015 00:52:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:55.015 00:52:47 -- common/autotest_common.sh@10 -- # set +x 00:18:55.015 ************************************ 00:18:55.015 END TEST nvmf_shutdown_tc1 00:18:55.015 ************************************ 00:18:55.015 00:52:47 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:18:55.015 00:52:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:55.015 00:52:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:55.015 00:52:47 -- common/autotest_common.sh@10 -- # set +x 00:18:55.015 ************************************ 00:18:55.015 START TEST nvmf_shutdown_tc2 00:18:55.015 ************************************ 00:18:55.015 00:52:47 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:18:55.015 00:52:47 -- target/shutdown.sh@99 -- # starttarget 00:18:55.015 00:52:47 -- target/shutdown.sh@15 -- # nvmftestinit 00:18:55.015 00:52:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:55.015 00:52:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.015 00:52:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:55.015 00:52:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:55.015 00:52:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:55.015 00:52:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.015 00:52:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.015 00:52:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.015 00:52:47 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:55.015 00:52:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:55.015 00:52:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:55.015 00:52:47 -- common/autotest_common.sh@10 -- # set +x 00:18:55.015 00:52:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:55.015 00:52:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:55.015 00:52:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:55.015 00:52:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:55.015 00:52:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:55.015 00:52:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:55.015 00:52:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:55.015 00:52:47 -- nvmf/common.sh@295 -- # net_devs=() 00:18:55.015 00:52:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:55.015 00:52:47 -- nvmf/common.sh@296 -- # e810=() 00:18:55.015 00:52:47 -- nvmf/common.sh@296 -- # local -ga e810 00:18:55.015 00:52:47 -- nvmf/common.sh@297 -- # x722=() 00:18:55.015 00:52:47 -- nvmf/common.sh@297 -- # local -ga x722 00:18:55.015 00:52:47 -- nvmf/common.sh@298 -- # mlx=() 00:18:55.015 00:52:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:55.015 00:52:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.015 00:52:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.015 00:52:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.015 00:52:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.015 00:52:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.015 00:52:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.015 00:52:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.015 00:52:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.015 00:52:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.015 00:52:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.015 00:52:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.015 00:52:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:55.015 00:52:47 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:55.015 00:52:47 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:55.015 00:52:47 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:55.015 00:52:47 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:55.015 00:52:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:55.015 00:52:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.015 00:52:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:55.015 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:55.015 00:52:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.015 00:52:47 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.015 00:52:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.015 00:52:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.015 00:52:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.015 00:52:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.015 00:52:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:55.015 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:55.015 00:52:47 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.015 00:52:47 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.015 00:52:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.015 00:52:47 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.016 00:52:47 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.016 00:52:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:55.016 00:52:47 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:55.016 00:52:47 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:55.016 00:52:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.016 00:52:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.016 00:52:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:55.016 00:52:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.016 00:52:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:55.016 Found net devices under 0000:86:00.0: cvl_0_0 00:18:55.016 00:52:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.016 00:52:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.016 00:52:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.016 00:52:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:55.016 00:52:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.016 00:52:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:55.016 Found net devices under 0000:86:00.1: cvl_0_1 00:18:55.016 00:52:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.016 00:52:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:55.016 00:52:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:55.016 00:52:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:55.016 00:52:47 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:55.016 00:52:47 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:55.016 00:52:47 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.016 00:52:47 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.016 00:52:47 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.016 00:52:47 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:55.016 00:52:47 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.016 00:52:47 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.016 00:52:47 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:55.016 00:52:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.016 00:52:47 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.016 00:52:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:55.016 00:52:47 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:55.016 00:52:47 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.016 00:52:47 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.016 00:52:47 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.016 00:52:47 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.016 00:52:47 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:55.016 00:52:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:55.016 00:52:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.016 00:52:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:18:55.016 00:52:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:55.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:18:55.016 00:18:55.016 --- 10.0.0.2 ping statistics --- 00:18:55.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.016 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:18:55.016 00:52:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:55.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.422 ms 00:18:55.016 00:18:55.016 --- 10.0.0.1 ping statistics --- 00:18:55.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.016 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:18:55.016 00:52:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.016 00:52:47 -- nvmf/common.sh@411 -- # return 0 00:18:55.016 00:52:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:55.016 00:52:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.016 00:52:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:55.016 00:52:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:55.016 00:52:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.016 00:52:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:55.016 00:52:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:55.016 00:52:47 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:18:55.016 00:52:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:55.016 00:52:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:55.016 00:52:47 -- common/autotest_common.sh@10 -- # set +x 00:18:55.016 00:52:47 -- nvmf/common.sh@470 -- # nvmfpid=1728204 00:18:55.016 00:52:47 -- nvmf/common.sh@471 -- # waitforlisten 1728204 00:18:55.016 00:52:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:55.016 00:52:47 -- common/autotest_common.sh@817 -- # '[' -z 1728204 ']' 00:18:55.016 00:52:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.016 00:52:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:55.016 00:52:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.016 00:52:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:55.016 00:52:47 -- common/autotest_common.sh@10 -- # set +x 00:18:55.016 [2024-04-27 00:52:47.630332] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:18:55.016 [2024-04-27 00:52:47.630376] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.016 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.016 [2024-04-27 00:52:47.682339] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:55.274 [2024-04-27 00:52:47.761845] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:55.274 [2024-04-27 00:52:47.761882] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.274 [2024-04-27 00:52:47.761889] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.274 [2024-04-27 00:52:47.761895] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.274 [2024-04-27 00:52:47.761900] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.274 [2024-04-27 00:52:47.761997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.274 [2024-04-27 00:52:47.762058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:55.274 [2024-04-27 00:52:47.762167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.274 [2024-04-27 00:52:47.762168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:55.842 00:52:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:55.842 00:52:48 -- common/autotest_common.sh@850 -- # return 0 00:18:55.842 00:52:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:55.842 00:52:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:55.842 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:18:55.842 00:52:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.842 00:52:48 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:55.842 00:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:55.842 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:18:55.842 [2024-04-27 00:52:48.487023] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.842 00:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:55.842 00:52:48 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:18:55.842 00:52:48 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:18:55.842 00:52:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:55.842 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:18:55.842 00:52:48 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:55.842 00:52:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:55.842 00:52:48 -- target/shutdown.sh@28 -- # cat 00:18:55.842 00:52:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:55.842 00:52:48 -- target/shutdown.sh@28 -- # cat 00:18:55.842 00:52:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:55.842 00:52:48 -- target/shutdown.sh@28 -- # cat 00:18:55.842 00:52:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:55.842 00:52:48 -- target/shutdown.sh@28 -- # cat 00:18:55.842 00:52:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:55.842 00:52:48 -- target/shutdown.sh@28 -- # cat 00:18:55.842 00:52:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:55.842 00:52:48 -- target/shutdown.sh@28 -- # cat 00:18:55.842 00:52:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:55.842 00:52:48 -- target/shutdown.sh@28 -- # cat 00:18:55.842 00:52:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:55.842 00:52:48 -- target/shutdown.sh@28 -- # cat 00:18:55.842 00:52:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:55.842 00:52:48 -- 
target/shutdown.sh@28 -- # cat 00:18:56.101 00:52:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:18:56.101 00:52:48 -- target/shutdown.sh@28 -- # cat 00:18:56.101 00:52:48 -- target/shutdown.sh@35 -- # rpc_cmd 00:18:56.101 00:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.101 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:18:56.101 Malloc1 00:18:56.101 [2024-04-27 00:52:48.582943] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.101 Malloc2 00:18:56.101 Malloc3 00:18:56.101 Malloc4 00:18:56.101 Malloc5 00:18:56.101 Malloc6 00:18:56.361 Malloc7 00:18:56.361 Malloc8 00:18:56.361 Malloc9 00:18:56.361 Malloc10 00:18:56.361 00:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.361 00:52:48 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:18:56.361 00:52:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:56.361 00:52:48 -- common/autotest_common.sh@10 -- # set +x 00:18:56.361 00:52:49 -- target/shutdown.sh@103 -- # perfpid=1728482 00:18:56.361 00:52:49 -- target/shutdown.sh@104 -- # waitforlisten 1728482 /var/tmp/bdevperf.sock 00:18:56.361 00:52:49 -- common/autotest_common.sh@817 -- # '[' -z 1728482 ']' 00:18:56.361 00:52:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.361 00:52:49 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:56.361 00:52:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:56.361 00:52:49 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:56.361 00:52:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
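The create_subsystems step above only traces ten bare cat calls because the RPC text is appended to rpcs.txt rather than echoed; the file is apparently replayed in one shot by the bare rpc_cmd at shutdown.sh@35 (stdin redirection does not show up in the xtrace), which is where the Malloc1..Malloc10 bdevs and the TCP listener on 10.0.0.2 port 4420 come from. The per-subsystem batch is not visible in this log, but it is typically of this shape (the RPC names are real SPDK methods; the size, block size and serial below are illustrative, not taken from this run):

    # one batch per nqn.2016-06.io.spdk:cnode$i, repeated for i in 1..10
    bdev_malloc_create -b Malloc1 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420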
00:18:56.361 00:52:49 -- nvmf/common.sh@521 -- # config=() 00:18:56.361 00:52:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:56.361 00:52:49 -- nvmf/common.sh@521 -- # local subsystem config 00:18:56.361 00:52:49 -- common/autotest_common.sh@10 -- # set +x 00:18:56.361 00:52:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:56.361 00:52:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:56.361 { 00:18:56.361 "params": { 00:18:56.361 "name": "Nvme$subsystem", 00:18:56.361 "trtype": "$TEST_TRANSPORT", 00:18:56.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.361 "adrfam": "ipv4", 00:18:56.361 "trsvcid": "$NVMF_PORT", 00:18:56.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.361 "hdgst": ${hdgst:-false}, 00:18:56.361 "ddgst": ${ddgst:-false} 00:18:56.361 }, 00:18:56.361 "method": "bdev_nvme_attach_controller" 00:18:56.361 } 00:18:56.361 EOF 00:18:56.361 )") 00:18:56.361 00:52:49 -- nvmf/common.sh@543 -- # cat 00:18:56.361 00:52:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:56.361 00:52:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:56.361 { 00:18:56.361 "params": { 00:18:56.361 "name": "Nvme$subsystem", 00:18:56.361 "trtype": "$TEST_TRANSPORT", 00:18:56.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.361 "adrfam": "ipv4", 00:18:56.361 "trsvcid": "$NVMF_PORT", 00:18:56.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.361 "hdgst": ${hdgst:-false}, 00:18:56.361 "ddgst": ${ddgst:-false} 00:18:56.361 }, 00:18:56.361 "method": "bdev_nvme_attach_controller" 00:18:56.361 } 00:18:56.361 EOF 00:18:56.361 )") 00:18:56.361 00:52:49 -- nvmf/common.sh@543 -- # cat 00:18:56.361 00:52:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:56.361 00:52:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:56.361 { 00:18:56.361 "params": { 00:18:56.361 "name": "Nvme$subsystem", 00:18:56.361 "trtype": "$TEST_TRANSPORT", 00:18:56.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.361 "adrfam": "ipv4", 00:18:56.361 "trsvcid": "$NVMF_PORT", 00:18:56.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.361 "hdgst": ${hdgst:-false}, 00:18:56.361 "ddgst": ${ddgst:-false} 00:18:56.361 }, 00:18:56.361 "method": "bdev_nvme_attach_controller" 00:18:56.361 } 00:18:56.361 EOF 00:18:56.361 )") 00:18:56.361 00:52:49 -- nvmf/common.sh@543 -- # cat 00:18:56.361 00:52:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:56.361 00:52:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:56.361 { 00:18:56.361 "params": { 00:18:56.361 "name": "Nvme$subsystem", 00:18:56.361 "trtype": "$TEST_TRANSPORT", 00:18:56.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.361 "adrfam": "ipv4", 00:18:56.361 "trsvcid": "$NVMF_PORT", 00:18:56.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.361 "hdgst": ${hdgst:-false}, 00:18:56.361 "ddgst": ${ddgst:-false} 00:18:56.361 }, 00:18:56.361 "method": "bdev_nvme_attach_controller" 00:18:56.361 } 00:18:56.361 EOF 00:18:56.361 )") 00:18:56.361 00:52:49 -- nvmf/common.sh@543 -- # cat 00:18:56.361 00:52:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:56.361 00:52:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:56.361 { 00:18:56.361 "params": { 00:18:56.361 "name": "Nvme$subsystem", 00:18:56.361 "trtype": 
"$TEST_TRANSPORT", 00:18:56.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.361 "adrfam": "ipv4", 00:18:56.361 "trsvcid": "$NVMF_PORT", 00:18:56.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.361 "hdgst": ${hdgst:-false}, 00:18:56.361 "ddgst": ${ddgst:-false} 00:18:56.361 }, 00:18:56.362 "method": "bdev_nvme_attach_controller" 00:18:56.362 } 00:18:56.362 EOF 00:18:56.362 )") 00:18:56.362 00:52:49 -- nvmf/common.sh@543 -- # cat 00:18:56.362 00:52:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:56.362 00:52:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:56.362 { 00:18:56.362 "params": { 00:18:56.362 "name": "Nvme$subsystem", 00:18:56.362 "trtype": "$TEST_TRANSPORT", 00:18:56.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.362 "adrfam": "ipv4", 00:18:56.362 "trsvcid": "$NVMF_PORT", 00:18:56.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.362 "hdgst": ${hdgst:-false}, 00:18:56.362 "ddgst": ${ddgst:-false} 00:18:56.362 }, 00:18:56.362 "method": "bdev_nvme_attach_controller" 00:18:56.362 } 00:18:56.362 EOF 00:18:56.362 )") 00:18:56.362 00:52:49 -- nvmf/common.sh@543 -- # cat 00:18:56.362 00:52:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:56.362 00:52:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:56.362 { 00:18:56.362 "params": { 00:18:56.362 "name": "Nvme$subsystem", 00:18:56.362 "trtype": "$TEST_TRANSPORT", 00:18:56.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.362 "adrfam": "ipv4", 00:18:56.362 "trsvcid": "$NVMF_PORT", 00:18:56.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.362 "hdgst": ${hdgst:-false}, 00:18:56.362 "ddgst": ${ddgst:-false} 00:18:56.362 }, 00:18:56.362 "method": "bdev_nvme_attach_controller" 00:18:56.362 } 00:18:56.362 EOF 00:18:56.362 )") 00:18:56.622 [2024-04-27 00:52:49.057024] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:18:56.622 [2024-04-27 00:52:49.057076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1728482 ] 00:18:56.622 00:52:49 -- nvmf/common.sh@543 -- # cat 00:18:56.622 00:52:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:56.622 00:52:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:56.622 { 00:18:56.622 "params": { 00:18:56.622 "name": "Nvme$subsystem", 00:18:56.622 "trtype": "$TEST_TRANSPORT", 00:18:56.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.622 "adrfam": "ipv4", 00:18:56.622 "trsvcid": "$NVMF_PORT", 00:18:56.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.622 "hdgst": ${hdgst:-false}, 00:18:56.622 "ddgst": ${ddgst:-false} 00:18:56.622 }, 00:18:56.622 "method": "bdev_nvme_attach_controller" 00:18:56.622 } 00:18:56.622 EOF 00:18:56.622 )") 00:18:56.622 00:52:49 -- nvmf/common.sh@543 -- # cat 00:18:56.622 00:52:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:56.622 00:52:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:56.622 { 00:18:56.622 "params": { 00:18:56.622 "name": "Nvme$subsystem", 00:18:56.622 "trtype": "$TEST_TRANSPORT", 00:18:56.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.622 "adrfam": "ipv4", 00:18:56.622 "trsvcid": "$NVMF_PORT", 00:18:56.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.622 "hdgst": ${hdgst:-false}, 00:18:56.622 "ddgst": ${ddgst:-false} 00:18:56.622 }, 00:18:56.622 "method": "bdev_nvme_attach_controller" 00:18:56.622 } 00:18:56.622 EOF 00:18:56.622 )") 00:18:56.622 00:52:49 -- nvmf/common.sh@543 -- # cat 00:18:56.622 00:52:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:56.622 00:52:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:56.622 { 00:18:56.622 "params": { 00:18:56.622 "name": "Nvme$subsystem", 00:18:56.622 "trtype": "$TEST_TRANSPORT", 00:18:56.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.622 "adrfam": "ipv4", 00:18:56.622 "trsvcid": "$NVMF_PORT", 00:18:56.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.622 "hdgst": ${hdgst:-false}, 00:18:56.622 "ddgst": ${ddgst:-false} 00:18:56.622 }, 00:18:56.622 "method": "bdev_nvme_attach_controller" 00:18:56.622 } 00:18:56.622 EOF 00:18:56.622 )") 00:18:56.622 00:52:49 -- nvmf/common.sh@543 -- # cat 00:18:56.622 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.622 00:52:49 -- nvmf/common.sh@545 -- # jq . 
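gen_nvmf_target_json builds one bdev_nvme_attach_controller fragment per subsystem (the repeated cat <<-EOF / cat pairs above), then the IFS=, and printf lines that follow join them with commas and the jq . pass pretty-prints and validates the result, which is what gets printed next. The outer wrapper is not shown in this trace, so treat the "subsystems"/"bdev" skeleton below as an assumption about what bdevperf is handed; only the comma join itself is visible here:

    # hedged sketch of the final assembly; only the join of ${config[@]} appears in the trace
    emit_target_json() {
        local IFS=,
        printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}" | jq .
    }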
00:18:56.622 00:52:49 -- nvmf/common.sh@546 -- # IFS=, 00:18:56.622 00:52:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:56.622 "params": { 00:18:56.622 "name": "Nvme1", 00:18:56.622 "trtype": "tcp", 00:18:56.622 "traddr": "10.0.0.2", 00:18:56.622 "adrfam": "ipv4", 00:18:56.622 "trsvcid": "4420", 00:18:56.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:56.622 "hdgst": false, 00:18:56.622 "ddgst": false 00:18:56.622 }, 00:18:56.622 "method": "bdev_nvme_attach_controller" 00:18:56.622 },{ 00:18:56.622 "params": { 00:18:56.622 "name": "Nvme2", 00:18:56.622 "trtype": "tcp", 00:18:56.622 "traddr": "10.0.0.2", 00:18:56.622 "adrfam": "ipv4", 00:18:56.622 "trsvcid": "4420", 00:18:56.622 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:56.622 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:56.622 "hdgst": false, 00:18:56.622 "ddgst": false 00:18:56.622 }, 00:18:56.622 "method": "bdev_nvme_attach_controller" 00:18:56.622 },{ 00:18:56.622 "params": { 00:18:56.622 "name": "Nvme3", 00:18:56.622 "trtype": "tcp", 00:18:56.622 "traddr": "10.0.0.2", 00:18:56.622 "adrfam": "ipv4", 00:18:56.622 "trsvcid": "4420", 00:18:56.622 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:56.622 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:56.622 "hdgst": false, 00:18:56.622 "ddgst": false 00:18:56.622 }, 00:18:56.622 "method": "bdev_nvme_attach_controller" 00:18:56.622 },{ 00:18:56.622 "params": { 00:18:56.622 "name": "Nvme4", 00:18:56.622 "trtype": "tcp", 00:18:56.622 "traddr": "10.0.0.2", 00:18:56.622 "adrfam": "ipv4", 00:18:56.622 "trsvcid": "4420", 00:18:56.622 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:56.622 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:56.622 "hdgst": false, 00:18:56.622 "ddgst": false 00:18:56.622 }, 00:18:56.622 "method": "bdev_nvme_attach_controller" 00:18:56.622 },{ 00:18:56.622 "params": { 00:18:56.622 "name": "Nvme5", 00:18:56.622 "trtype": "tcp", 00:18:56.622 "traddr": "10.0.0.2", 00:18:56.622 "adrfam": "ipv4", 00:18:56.622 "trsvcid": "4420", 00:18:56.622 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:56.622 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:56.622 "hdgst": false, 00:18:56.622 "ddgst": false 00:18:56.622 }, 00:18:56.622 "method": "bdev_nvme_attach_controller" 00:18:56.622 },{ 00:18:56.622 "params": { 00:18:56.622 "name": "Nvme6", 00:18:56.622 "trtype": "tcp", 00:18:56.622 "traddr": "10.0.0.2", 00:18:56.622 "adrfam": "ipv4", 00:18:56.622 "trsvcid": "4420", 00:18:56.622 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:56.622 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:56.622 "hdgst": false, 00:18:56.622 "ddgst": false 00:18:56.622 }, 00:18:56.622 "method": "bdev_nvme_attach_controller" 00:18:56.622 },{ 00:18:56.622 "params": { 00:18:56.622 "name": "Nvme7", 00:18:56.622 "trtype": "tcp", 00:18:56.622 "traddr": "10.0.0.2", 00:18:56.622 "adrfam": "ipv4", 00:18:56.622 "trsvcid": "4420", 00:18:56.622 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:56.622 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:56.622 "hdgst": false, 00:18:56.622 "ddgst": false 00:18:56.622 }, 00:18:56.622 "method": "bdev_nvme_attach_controller" 00:18:56.622 },{ 00:18:56.622 "params": { 00:18:56.622 "name": "Nvme8", 00:18:56.622 "trtype": "tcp", 00:18:56.622 "traddr": "10.0.0.2", 00:18:56.622 "adrfam": "ipv4", 00:18:56.622 "trsvcid": "4420", 00:18:56.622 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:56.622 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:56.622 "hdgst": false, 00:18:56.622 "ddgst": false 00:18:56.622 }, 00:18:56.622 "method": 
"bdev_nvme_attach_controller" 00:18:56.622 },{ 00:18:56.622 "params": { 00:18:56.622 "name": "Nvme9", 00:18:56.622 "trtype": "tcp", 00:18:56.622 "traddr": "10.0.0.2", 00:18:56.622 "adrfam": "ipv4", 00:18:56.622 "trsvcid": "4420", 00:18:56.622 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:56.622 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:18:56.622 "hdgst": false, 00:18:56.622 "ddgst": false 00:18:56.622 }, 00:18:56.622 "method": "bdev_nvme_attach_controller" 00:18:56.622 },{ 00:18:56.622 "params": { 00:18:56.622 "name": "Nvme10", 00:18:56.622 "trtype": "tcp", 00:18:56.622 "traddr": "10.0.0.2", 00:18:56.622 "adrfam": "ipv4", 00:18:56.622 "trsvcid": "4420", 00:18:56.622 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:56.622 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:56.622 "hdgst": false, 00:18:56.622 "ddgst": false 00:18:56.622 }, 00:18:56.622 "method": "bdev_nvme_attach_controller" 00:18:56.622 }' 00:18:56.622 [2024-04-27 00:52:49.113083] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.622 [2024-04-27 00:52:49.184116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.529 Running I/O for 10 seconds... 00:18:58.529 00:52:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:58.529 00:52:50 -- common/autotest_common.sh@850 -- # return 0 00:18:58.529 00:52:50 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:58.529 00:52:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.529 00:52:50 -- common/autotest_common.sh@10 -- # set +x 00:18:58.529 00:52:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.529 00:52:50 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:18:58.529 00:52:50 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:58.529 00:52:50 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:18:58.529 00:52:50 -- target/shutdown.sh@57 -- # local ret=1 00:18:58.529 00:52:50 -- target/shutdown.sh@58 -- # local i 00:18:58.529 00:52:50 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:18:58.529 00:52:50 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:58.529 00:52:50 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:58.529 00:52:50 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:58.529 00:52:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.529 00:52:50 -- common/autotest_common.sh@10 -- # set +x 00:18:58.529 00:52:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.529 00:52:50 -- target/shutdown.sh@60 -- # read_io_count=3 00:18:58.529 00:52:50 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:18:58.529 00:52:50 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:58.788 00:52:51 -- target/shutdown.sh@59 -- # (( i-- )) 00:18:58.788 00:52:51 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:58.788 00:52:51 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:58.788 00:52:51 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:58.788 00:52:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.788 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:18:58.788 00:52:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.788 00:52:51 -- target/shutdown.sh@60 -- # read_io_count=67 00:18:58.788 00:52:51 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:18:58.788 00:52:51 -- target/shutdown.sh@67 -- # sleep 0.25 00:18:59.049 00:52:51 -- target/shutdown.sh@59 -- # (( i-- )) 
00:18:59.049 00:52:51 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:18:59.049 00:52:51 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:59.049 00:52:51 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:18:59.049 00:52:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:59.049 00:52:51 -- common/autotest_common.sh@10 -- # set +x 00:18:59.049 00:52:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:59.049 00:52:51 -- target/shutdown.sh@60 -- # read_io_count=131 00:18:59.049 00:52:51 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:18:59.049 00:52:51 -- target/shutdown.sh@64 -- # ret=0 00:18:59.049 00:52:51 -- target/shutdown.sh@65 -- # break 00:18:59.049 00:52:51 -- target/shutdown.sh@69 -- # return 0 00:18:59.049 00:52:51 -- target/shutdown.sh@110 -- # killprocess 1728482 00:18:59.049 00:52:51 -- common/autotest_common.sh@936 -- # '[' -z 1728482 ']' 00:18:59.049 00:52:51 -- common/autotest_common.sh@940 -- # kill -0 1728482 00:18:59.049 00:52:51 -- common/autotest_common.sh@941 -- # uname 00:18:59.049 00:52:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:59.049 00:52:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1728482 00:18:59.049 00:52:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:59.049 00:52:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:59.049 00:52:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1728482' 00:18:59.049 killing process with pid 1728482 00:18:59.049 00:52:51 -- common/autotest_common.sh@955 -- # kill 1728482 00:18:59.049 00:52:51 -- common/autotest_common.sh@960 -- # wait 1728482 00:18:59.049 Received shutdown signal, test time was about 0.940685 seconds 00:18:59.049 00:18:59.049 Latency(us) 00:18:59.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.049 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:59.049 Verification LBA range: start 0x0 length 0x400 00:18:59.049 Nvme1n1 : 0.90 213.11 13.32 0.00 0.00 297167.17 38295.82 279012.40 00:18:59.049 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:59.049 Verification LBA range: start 0x0 length 0x400 00:18:59.049 Nvme2n1 : 0.87 301.34 18.83 0.00 0.00 205136.05 1795.12 213362.42 00:18:59.049 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:59.049 Verification LBA range: start 0x0 length 0x400 00:18:59.049 Nvme3n1 : 0.88 291.31 18.21 0.00 0.00 209322.07 25188.62 194214.51 00:18:59.049 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:59.049 Verification LBA range: start 0x0 length 0x400 00:18:59.049 Nvme4n1 : 0.90 286.01 17.88 0.00 0.00 209471.00 22111.28 218833.25 00:18:59.049 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:59.049 Verification LBA range: start 0x0 length 0x400 00:18:59.049 Nvme5n1 : 0.88 290.44 18.15 0.00 0.00 202157.86 24504.77 217921.45 00:18:59.049 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:59.049 Verification LBA range: start 0x0 length 0x400 00:18:59.049 Nvme6n1 : 0.91 282.22 17.64 0.00 0.00 204577.17 22453.20 222480.47 00:18:59.049 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:59.049 Verification LBA range: start 0x0 length 0x400 00:18:59.049 Nvme7n1 : 0.94 204.25 12.77 0.00 0.00 265607.79 37839.92 279012.40 00:18:59.049 Job: Nvme8n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:18:59.049 Verification LBA range: start 0x0 length 0x400 00:18:59.049 Nvme8n1 : 0.92 278.90 17.43 0.00 0.00 199439.58 16982.37 218833.25 00:18:59.049 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:59.049 Verification LBA range: start 0x0 length 0x400 00:18:59.049 Nvme9n1 : 0.91 210.60 13.16 0.00 0.00 258584.34 21085.50 255305.46 00:18:59.049 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:59.049 Verification LBA range: start 0x0 length 0x400 00:18:59.049 Nvme10n1 : 0.92 279.69 17.48 0.00 0.00 190993.36 19375.86 222480.47 00:18:59.049 =================================================================================================================== 00:18:59.049 Total : 2637.86 164.87 0.00 0.00 220197.02 1795.12 279012.40 00:18:59.309 00:52:51 -- target/shutdown.sh@113 -- # sleep 1 00:19:00.686 00:52:52 -- target/shutdown.sh@114 -- # kill -0 1728204 00:19:00.686 00:52:52 -- target/shutdown.sh@116 -- # stoptarget 00:19:00.686 00:52:52 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:00.686 00:52:52 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:00.686 00:52:52 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:00.686 00:52:52 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:00.686 00:52:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:00.686 00:52:52 -- nvmf/common.sh@117 -- # sync 00:19:00.686 00:52:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:00.686 00:52:52 -- nvmf/common.sh@120 -- # set +e 00:19:00.686 00:52:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:00.686 00:52:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:00.686 rmmod nvme_tcp 00:19:00.686 rmmod nvme_fabrics 00:19:00.686 rmmod nvme_keyring 00:19:00.686 00:52:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:00.686 00:52:53 -- nvmf/common.sh@124 -- # set -e 00:19:00.686 00:52:53 -- nvmf/common.sh@125 -- # return 0 00:19:00.686 00:52:53 -- nvmf/common.sh@478 -- # '[' -n 1728204 ']' 00:19:00.686 00:52:53 -- nvmf/common.sh@479 -- # killprocess 1728204 00:19:00.686 00:52:53 -- common/autotest_common.sh@936 -- # '[' -z 1728204 ']' 00:19:00.686 00:52:53 -- common/autotest_common.sh@940 -- # kill -0 1728204 00:19:00.686 00:52:53 -- common/autotest_common.sh@941 -- # uname 00:19:00.686 00:52:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:00.686 00:52:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1728204 00:19:00.686 00:52:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:00.686 00:52:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:00.686 00:52:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1728204' 00:19:00.686 killing process with pid 1728204 00:19:00.687 00:52:53 -- common/autotest_common.sh@955 -- # kill 1728204 00:19:00.687 00:52:53 -- common/autotest_common.sh@960 -- # wait 1728204 00:19:00.947 00:52:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:00.947 00:52:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:00.947 00:52:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:00.947 00:52:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:00.947 00:52:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:00.947 00:52:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:19:00.947 00:52:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.947 00:52:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.856 00:52:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:02.856 00:19:02.856 real 0m8.281s 00:19:02.856 user 0m25.517s 00:19:02.856 sys 0m1.396s 00:19:02.856 00:52:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:03.116 00:52:55 -- common/autotest_common.sh@10 -- # set +x 00:19:03.116 ************************************ 00:19:03.116 END TEST nvmf_shutdown_tc2 00:19:03.116 ************************************ 00:19:03.116 00:52:55 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:03.116 00:52:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:03.116 00:52:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:03.116 00:52:55 -- common/autotest_common.sh@10 -- # set +x 00:19:03.116 ************************************ 00:19:03.116 START TEST nvmf_shutdown_tc3 00:19:03.116 ************************************ 00:19:03.116 00:52:55 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:19:03.116 00:52:55 -- target/shutdown.sh@121 -- # starttarget 00:19:03.116 00:52:55 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:03.116 00:52:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:03.116 00:52:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.116 00:52:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:03.116 00:52:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:03.116 00:52:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:03.116 00:52:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.116 00:52:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.116 00:52:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.116 00:52:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:03.116 00:52:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:03.116 00:52:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:03.116 00:52:55 -- common/autotest_common.sh@10 -- # set +x 00:19:03.116 00:52:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:03.116 00:52:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:03.116 00:52:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:03.116 00:52:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:03.116 00:52:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:03.116 00:52:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:03.116 00:52:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:03.116 00:52:55 -- nvmf/common.sh@295 -- # net_devs=() 00:19:03.116 00:52:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:03.116 00:52:55 -- nvmf/common.sh@296 -- # e810=() 00:19:03.116 00:52:55 -- nvmf/common.sh@296 -- # local -ga e810 00:19:03.116 00:52:55 -- nvmf/common.sh@297 -- # x722=() 00:19:03.116 00:52:55 -- nvmf/common.sh@297 -- # local -ga x722 00:19:03.116 00:52:55 -- nvmf/common.sh@298 -- # mlx=() 00:19:03.116 00:52:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:03.116 00:52:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:03.116 00:52:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:03.116 00:52:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:03.116 00:52:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:03.116 00:52:55 
-- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:03.116 00:52:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:03.116 00:52:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:03.116 00:52:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:03.116 00:52:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:03.116 00:52:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:03.116 00:52:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:03.116 00:52:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:03.116 00:52:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:03.116 00:52:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:03.116 00:52:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:03.116 00:52:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:03.116 00:52:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:03.116 00:52:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:03.116 00:52:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:03.116 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:03.116 00:52:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.116 00:52:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.116 00:52:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.116 00:52:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.116 00:52:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.116 00:52:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:03.116 00:52:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:03.116 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:03.116 00:52:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:03.116 00:52:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:03.117 00:52:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:03.117 00:52:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:03.117 00:52:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:03.117 00:52:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:03.117 00:52:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:03.117 00:52:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:03.117 00:52:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.117 00:52:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.117 00:52:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:03.117 00:52:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.117 00:52:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:03.117 Found net devices under 0000:86:00.0: cvl_0_0 00:19:03.117 00:52:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.117 00:52:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:03.117 00:52:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:03.117 00:52:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:03.117 00:52:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:03.117 00:52:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:03.117 Found net devices under 0000:86:00.1: cvl_0_1 00:19:03.117 00:52:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:03.117 
00:52:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:03.117 00:52:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:03.117 00:52:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:03.117 00:52:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:03.117 00:52:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:03.117 00:52:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.117 00:52:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:03.117 00:52:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:03.117 00:52:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:03.117 00:52:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:03.117 00:52:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:03.117 00:52:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:03.117 00:52:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:03.117 00:52:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.117 00:52:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:03.117 00:52:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:03.117 00:52:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:03.117 00:52:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:03.378 00:52:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:03.378 00:52:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:03.378 00:52:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:03.378 00:52:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:03.378 00:52:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:03.378 00:52:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:03.378 00:52:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:03.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:03.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:19:03.378 00:19:03.378 --- 10.0.0.2 ping statistics --- 00:19:03.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.378 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:19:03.378 00:52:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:03.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:03.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:19:03.378 00:19:03.378 --- 10.0.0.1 ping statistics --- 00:19:03.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.378 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:19:03.378 00:52:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.378 00:52:55 -- nvmf/common.sh@411 -- # return 0 00:19:03.378 00:52:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:03.378 00:52:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.378 00:52:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:03.378 00:52:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:03.378 00:52:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.378 00:52:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:03.378 00:52:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:03.378 00:52:55 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:03.378 00:52:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:03.378 00:52:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:03.378 00:52:55 -- common/autotest_common.sh@10 -- # set +x 00:19:03.378 00:52:56 -- nvmf/common.sh@470 -- # nvmfpid=1729715 00:19:03.378 00:52:56 -- nvmf/common.sh@471 -- # waitforlisten 1729715 00:19:03.378 00:52:56 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:03.378 00:52:56 -- common/autotest_common.sh@817 -- # '[' -z 1729715 ']' 00:19:03.378 00:52:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.378 00:52:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:03.378 00:52:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.378 00:52:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:03.378 00:52:56 -- common/autotest_common.sh@10 -- # set +x 00:19:03.378 [2024-04-27 00:52:56.048842] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:03.378 [2024-04-27 00:52:56.048887] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.639 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.639 [2024-04-27 00:52:56.106345] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:03.639 [2024-04-27 00:52:56.184436] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.639 [2024-04-27 00:52:56.184474] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.639 [2024-04-27 00:52:56.184481] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.639 [2024-04-27 00:52:56.184488] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.639 [2024-04-27 00:52:56.184492] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
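The tc3 target is started with -e 0xFFFF, so every tracepoint group is enabled and the app_setup_trace notices just below spell out how to look at them. Following the log's own hint (binary path depends on the build layout; this is a usage note, not output from the run):

    # snapshot the live trace from instance 0, as the notice suggests
    spdk_trace -s nvmf -i 0 > nvmf_trace.out

    # or keep the raw shared-memory file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/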
00:19:03.639 [2024-04-27 00:52:56.184594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.639 [2024-04-27 00:52:56.184678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:03.639 [2024-04-27 00:52:56.184786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.639 [2024-04-27 00:52:56.184787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:04.208 00:52:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:04.208 00:52:56 -- common/autotest_common.sh@850 -- # return 0 00:19:04.208 00:52:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:04.208 00:52:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:04.208 00:52:56 -- common/autotest_common.sh@10 -- # set +x 00:19:04.208 00:52:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.208 00:52:56 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:04.208 00:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.208 00:52:56 -- common/autotest_common.sh@10 -- # set +x 00:19:04.208 [2024-04-27 00:52:56.903916] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.469 00:52:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.469 00:52:56 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:04.469 00:52:56 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:04.469 00:52:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:04.469 00:52:56 -- common/autotest_common.sh@10 -- # set +x 00:19:04.469 00:52:56 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:04.469 00:52:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:04.469 00:52:56 -- target/shutdown.sh@28 -- # cat 00:19:04.469 00:52:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:04.469 00:52:56 -- target/shutdown.sh@28 -- # cat 00:19:04.469 00:52:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:04.469 00:52:56 -- target/shutdown.sh@28 -- # cat 00:19:04.469 00:52:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:04.469 00:52:56 -- target/shutdown.sh@28 -- # cat 00:19:04.469 00:52:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:04.469 00:52:56 -- target/shutdown.sh@28 -- # cat 00:19:04.469 00:52:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:04.469 00:52:56 -- target/shutdown.sh@28 -- # cat 00:19:04.469 00:52:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:04.469 00:52:56 -- target/shutdown.sh@28 -- # cat 00:19:04.469 00:52:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:04.469 00:52:56 -- target/shutdown.sh@28 -- # cat 00:19:04.469 00:52:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:04.469 00:52:56 -- target/shutdown.sh@28 -- # cat 00:19:04.469 00:52:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:04.469 00:52:56 -- target/shutdown.sh@28 -- # cat 00:19:04.469 00:52:56 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:04.469 00:52:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.469 00:52:56 -- common/autotest_common.sh@10 -- # set +x 00:19:04.469 Malloc1 00:19:04.469 [2024-04-27 00:52:56.999472] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.469 Malloc2 
00:19:04.469 Malloc3 00:19:04.469 Malloc4 00:19:04.469 Malloc5 00:19:04.729 Malloc6 00:19:04.729 Malloc7 00:19:04.729 Malloc8 00:19:04.729 Malloc9 00:19:04.729 Malloc10 00:19:04.729 00:52:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.729 00:52:57 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:04.729 00:52:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:04.729 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:19:04.989 00:52:57 -- target/shutdown.sh@125 -- # perfpid=1730003 00:19:04.989 00:52:57 -- target/shutdown.sh@126 -- # waitforlisten 1730003 /var/tmp/bdevperf.sock 00:19:04.989 00:52:57 -- common/autotest_common.sh@817 -- # '[' -z 1730003 ']' 00:19:04.989 00:52:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.989 00:52:57 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:04.989 00:52:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:04.989 00:52:57 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:04.989 00:52:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.989 00:52:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:04.989 00:52:57 -- nvmf/common.sh@521 -- # config=() 00:19:04.989 00:52:57 -- common/autotest_common.sh@10 -- # set +x 00:19:04.989 00:52:57 -- nvmf/common.sh@521 -- # local subsystem config 00:19:04.989 00:52:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:04.989 00:52:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:04.989 { 00:19:04.989 "params": { 00:19:04.989 "name": "Nvme$subsystem", 00:19:04.989 "trtype": "$TEST_TRANSPORT", 00:19:04.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.989 "adrfam": "ipv4", 00:19:04.989 "trsvcid": "$NVMF_PORT", 00:19:04.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.989 "hdgst": ${hdgst:-false}, 00:19:04.989 "ddgst": ${ddgst:-false} 00:19:04.989 }, 00:19:04.989 "method": "bdev_nvme_attach_controller" 00:19:04.989 } 00:19:04.989 EOF 00:19:04.989 )") 00:19:04.989 00:52:57 -- nvmf/common.sh@543 -- # cat 00:19:04.989 00:52:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:04.989 00:52:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:04.989 { 00:19:04.989 "params": { 00:19:04.989 "name": "Nvme$subsystem", 00:19:04.989 "trtype": "$TEST_TRANSPORT", 00:19:04.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.989 "adrfam": "ipv4", 00:19:04.989 "trsvcid": "$NVMF_PORT", 00:19:04.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.989 "hdgst": ${hdgst:-false}, 00:19:04.989 "ddgst": ${ddgst:-false} 00:19:04.989 }, 00:19:04.989 "method": "bdev_nvme_attach_controller" 00:19:04.989 } 00:19:04.989 EOF 00:19:04.989 )") 00:19:04.989 00:52:57 -- nvmf/common.sh@543 -- # cat 00:19:04.989 00:52:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:04.989 00:52:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:04.989 { 00:19:04.989 "params": { 00:19:04.989 "name": "Nvme$subsystem", 00:19:04.989 "trtype": "$TEST_TRANSPORT", 00:19:04.989 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:19:04.989 "adrfam": "ipv4", 00:19:04.989 "trsvcid": "$NVMF_PORT", 00:19:04.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.989 "hdgst": ${hdgst:-false}, 00:19:04.989 "ddgst": ${ddgst:-false} 00:19:04.989 }, 00:19:04.989 "method": "bdev_nvme_attach_controller" 00:19:04.989 } 00:19:04.989 EOF 00:19:04.989 )") 00:19:04.989 00:52:57 -- nvmf/common.sh@543 -- # cat 00:19:04.989 00:52:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:04.989 00:52:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:04.989 { 00:19:04.989 "params": { 00:19:04.989 "name": "Nvme$subsystem", 00:19:04.989 "trtype": "$TEST_TRANSPORT", 00:19:04.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.989 "adrfam": "ipv4", 00:19:04.989 "trsvcid": "$NVMF_PORT", 00:19:04.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.989 "hdgst": ${hdgst:-false}, 00:19:04.989 "ddgst": ${ddgst:-false} 00:19:04.989 }, 00:19:04.989 "method": "bdev_nvme_attach_controller" 00:19:04.989 } 00:19:04.989 EOF 00:19:04.989 )") 00:19:04.989 00:52:57 -- nvmf/common.sh@543 -- # cat 00:19:04.989 00:52:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:04.989 00:52:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:04.989 { 00:19:04.989 "params": { 00:19:04.989 "name": "Nvme$subsystem", 00:19:04.989 "trtype": "$TEST_TRANSPORT", 00:19:04.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.989 "adrfam": "ipv4", 00:19:04.989 "trsvcid": "$NVMF_PORT", 00:19:04.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.989 "hdgst": ${hdgst:-false}, 00:19:04.989 "ddgst": ${ddgst:-false} 00:19:04.989 }, 00:19:04.990 "method": "bdev_nvme_attach_controller" 00:19:04.990 } 00:19:04.990 EOF 00:19:04.990 )") 00:19:04.990 00:52:57 -- nvmf/common.sh@543 -- # cat 00:19:04.990 00:52:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:04.990 00:52:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:04.990 { 00:19:04.990 "params": { 00:19:04.990 "name": "Nvme$subsystem", 00:19:04.990 "trtype": "$TEST_TRANSPORT", 00:19:04.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.990 "adrfam": "ipv4", 00:19:04.990 "trsvcid": "$NVMF_PORT", 00:19:04.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.990 "hdgst": ${hdgst:-false}, 00:19:04.990 "ddgst": ${ddgst:-false} 00:19:04.990 }, 00:19:04.990 "method": "bdev_nvme_attach_controller" 00:19:04.990 } 00:19:04.990 EOF 00:19:04.990 )") 00:19:04.990 00:52:57 -- nvmf/common.sh@543 -- # cat 00:19:04.990 00:52:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:04.990 00:52:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:04.990 { 00:19:04.990 "params": { 00:19:04.990 "name": "Nvme$subsystem", 00:19:04.990 "trtype": "$TEST_TRANSPORT", 00:19:04.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.990 "adrfam": "ipv4", 00:19:04.990 "trsvcid": "$NVMF_PORT", 00:19:04.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.990 "hdgst": ${hdgst:-false}, 00:19:04.990 "ddgst": ${ddgst:-false} 00:19:04.990 }, 00:19:04.990 "method": "bdev_nvme_attach_controller" 00:19:04.990 } 00:19:04.990 EOF 00:19:04.990 )") 00:19:04.990 00:52:57 -- nvmf/common.sh@543 -- # cat 00:19:04.990 [2024-04-27 00:52:57.471453] Starting SPDK v24.05-pre git sha1 
d4fbb5733 / DPDK 23.11.0 initialization... 00:19:04.990 [2024-04-27 00:52:57.471502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730003 ] 00:19:04.990 00:52:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:04.990 00:52:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:04.990 { 00:19:04.990 "params": { 00:19:04.990 "name": "Nvme$subsystem", 00:19:04.990 "trtype": "$TEST_TRANSPORT", 00:19:04.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.990 "adrfam": "ipv4", 00:19:04.990 "trsvcid": "$NVMF_PORT", 00:19:04.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.990 "hdgst": ${hdgst:-false}, 00:19:04.990 "ddgst": ${ddgst:-false} 00:19:04.990 }, 00:19:04.990 "method": "bdev_nvme_attach_controller" 00:19:04.990 } 00:19:04.990 EOF 00:19:04.990 )") 00:19:04.990 00:52:57 -- nvmf/common.sh@543 -- # cat 00:19:04.990 00:52:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:04.990 00:52:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:04.990 { 00:19:04.990 "params": { 00:19:04.990 "name": "Nvme$subsystem", 00:19:04.990 "trtype": "$TEST_TRANSPORT", 00:19:04.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.990 "adrfam": "ipv4", 00:19:04.990 "trsvcid": "$NVMF_PORT", 00:19:04.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.990 "hdgst": ${hdgst:-false}, 00:19:04.990 "ddgst": ${ddgst:-false} 00:19:04.990 }, 00:19:04.990 "method": "bdev_nvme_attach_controller" 00:19:04.990 } 00:19:04.990 EOF 00:19:04.990 )") 00:19:04.990 00:52:57 -- nvmf/common.sh@543 -- # cat 00:19:04.990 00:52:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:04.990 00:52:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:04.990 { 00:19:04.990 "params": { 00:19:04.990 "name": "Nvme$subsystem", 00:19:04.990 "trtype": "$TEST_TRANSPORT", 00:19:04.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:04.990 "adrfam": "ipv4", 00:19:04.990 "trsvcid": "$NVMF_PORT", 00:19:04.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:04.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:04.990 "hdgst": ${hdgst:-false}, 00:19:04.990 "ddgst": ${ddgst:-false} 00:19:04.990 }, 00:19:04.990 "method": "bdev_nvme_attach_controller" 00:19:04.990 } 00:19:04.990 EOF 00:19:04.990 )") 00:19:04.990 00:52:57 -- nvmf/common.sh@543 -- # cat 00:19:04.990 00:52:57 -- nvmf/common.sh@545 -- # jq . 
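The tc3 config is assembled the same way as in tc2, and the bdevperf command traced at shutdown.sh@124 consumes it on /dev/fd/63, i.e. via process substitution. Written out as a standalone invocation (gen_nvmf_target_json here stands for the function traced above), the flags mean queue depth 64, 64 KiB I/Os, the verify workload, a 10 second run, against bdevperf's own RPC socket:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10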
00:19:04.990 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.990 00:52:57 -- nvmf/common.sh@546 -- # IFS=, 00:19:04.990 00:52:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:04.990 "params": { 00:19:04.990 "name": "Nvme1", 00:19:04.990 "trtype": "tcp", 00:19:04.990 "traddr": "10.0.0.2", 00:19:04.990 "adrfam": "ipv4", 00:19:04.990 "trsvcid": "4420", 00:19:04.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.990 "hdgst": false, 00:19:04.990 "ddgst": false 00:19:04.990 }, 00:19:04.990 "method": "bdev_nvme_attach_controller" 00:19:04.990 },{ 00:19:04.990 "params": { 00:19:04.990 "name": "Nvme2", 00:19:04.990 "trtype": "tcp", 00:19:04.990 "traddr": "10.0.0.2", 00:19:04.990 "adrfam": "ipv4", 00:19:04.990 "trsvcid": "4420", 00:19:04.990 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:04.990 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:04.990 "hdgst": false, 00:19:04.990 "ddgst": false 00:19:04.990 }, 00:19:04.990 "method": "bdev_nvme_attach_controller" 00:19:04.990 },{ 00:19:04.990 "params": { 00:19:04.990 "name": "Nvme3", 00:19:04.990 "trtype": "tcp", 00:19:04.990 "traddr": "10.0.0.2", 00:19:04.990 "adrfam": "ipv4", 00:19:04.990 "trsvcid": "4420", 00:19:04.990 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:04.990 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:04.990 "hdgst": false, 00:19:04.990 "ddgst": false 00:19:04.990 }, 00:19:04.990 "method": "bdev_nvme_attach_controller" 00:19:04.990 },{ 00:19:04.990 "params": { 00:19:04.990 "name": "Nvme4", 00:19:04.990 "trtype": "tcp", 00:19:04.990 "traddr": "10.0.0.2", 00:19:04.990 "adrfam": "ipv4", 00:19:04.990 "trsvcid": "4420", 00:19:04.990 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:04.990 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:04.990 "hdgst": false, 00:19:04.990 "ddgst": false 00:19:04.990 }, 00:19:04.990 "method": "bdev_nvme_attach_controller" 00:19:04.990 },{ 00:19:04.990 "params": { 00:19:04.990 "name": "Nvme5", 00:19:04.990 "trtype": "tcp", 00:19:04.990 "traddr": "10.0.0.2", 00:19:04.990 "adrfam": "ipv4", 00:19:04.990 "trsvcid": "4420", 00:19:04.990 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:04.990 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:04.991 "hdgst": false, 00:19:04.991 "ddgst": false 00:19:04.991 }, 00:19:04.991 "method": "bdev_nvme_attach_controller" 00:19:04.991 },{ 00:19:04.991 "params": { 00:19:04.991 "name": "Nvme6", 00:19:04.991 "trtype": "tcp", 00:19:04.991 "traddr": "10.0.0.2", 00:19:04.991 "adrfam": "ipv4", 00:19:04.991 "trsvcid": "4420", 00:19:04.991 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:04.991 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:04.991 "hdgst": false, 00:19:04.991 "ddgst": false 00:19:04.991 }, 00:19:04.991 "method": "bdev_nvme_attach_controller" 00:19:04.991 },{ 00:19:04.991 "params": { 00:19:04.991 "name": "Nvme7", 00:19:04.991 "trtype": "tcp", 00:19:04.991 "traddr": "10.0.0.2", 00:19:04.991 "adrfam": "ipv4", 00:19:04.991 "trsvcid": "4420", 00:19:04.991 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:04.991 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:04.991 "hdgst": false, 00:19:04.991 "ddgst": false 00:19:04.991 }, 00:19:04.991 "method": "bdev_nvme_attach_controller" 00:19:04.991 },{ 00:19:04.991 "params": { 00:19:04.991 "name": "Nvme8", 00:19:04.991 "trtype": "tcp", 00:19:04.991 "traddr": "10.0.0.2", 00:19:04.991 "adrfam": "ipv4", 00:19:04.991 "trsvcid": "4420", 00:19:04.991 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:04.991 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:04.991 "hdgst": false, 00:19:04.991 "ddgst": false 
00:19:04.991 }, 00:19:04.991 "method": "bdev_nvme_attach_controller" 00:19:04.991 },{ 00:19:04.991 "params": { 00:19:04.991 "name": "Nvme9", 00:19:04.991 "trtype": "tcp", 00:19:04.991 "traddr": "10.0.0.2", 00:19:04.991 "adrfam": "ipv4", 00:19:04.991 "trsvcid": "4420", 00:19:04.991 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:04.991 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:04.991 "hdgst": false, 00:19:04.991 "ddgst": false 00:19:04.991 }, 00:19:04.991 "method": "bdev_nvme_attach_controller" 00:19:04.991 },{ 00:19:04.991 "params": { 00:19:04.991 "name": "Nvme10", 00:19:04.991 "trtype": "tcp", 00:19:04.991 "traddr": "10.0.0.2", 00:19:04.991 "adrfam": "ipv4", 00:19:04.991 "trsvcid": "4420", 00:19:04.991 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:04.991 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:04.991 "hdgst": false, 00:19:04.991 "ddgst": false 00:19:04.991 }, 00:19:04.991 "method": "bdev_nvme_attach_controller" 00:19:04.991 }' 00:19:04.991 [2024-04-27 00:52:57.527638] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.991 [2024-04-27 00:52:57.598924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.898 Running I/O for 10 seconds... 00:19:06.898 00:52:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:06.898 00:52:59 -- common/autotest_common.sh@850 -- # return 0 00:19:06.898 00:52:59 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:06.898 00:52:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.898 00:52:59 -- common/autotest_common.sh@10 -- # set +x 00:19:06.898 00:52:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.898 00:52:59 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:06.898 00:52:59 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:06.898 00:52:59 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:06.898 00:52:59 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:06.898 00:52:59 -- target/shutdown.sh@57 -- # local ret=1 00:19:06.898 00:52:59 -- target/shutdown.sh@58 -- # local i 00:19:06.898 00:52:59 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:06.898 00:52:59 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:06.898 00:52:59 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:06.898 00:52:59 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:06.898 00:52:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:06.898 00:52:59 -- common/autotest_common.sh@10 -- # set +x 00:19:06.898 00:52:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:06.898 00:52:59 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:06.898 00:52:59 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:06.898 00:52:59 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:07.158 00:52:59 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:07.158 00:52:59 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:07.158 00:52:59 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:07.158 00:52:59 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:07.158 00:52:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.158 00:52:59 -- common/autotest_common.sh@10 -- # set +x 00:19:07.158 00:52:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.158 00:52:59 -- target/shutdown.sh@60 -- # read_io_count=67 
00:19:07.158 00:52:59 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:07.158 00:52:59 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:07.418 00:53:00 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:07.418 00:53:00 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:07.418 00:53:00 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:07.418 00:53:00 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:07.418 00:53:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.418 00:53:00 -- common/autotest_common.sh@10 -- # set +x 00:19:07.418 00:53:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.418 00:53:00 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:07.418 00:53:00 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:07.418 00:53:00 -- target/shutdown.sh@64 -- # ret=0 00:19:07.418 00:53:00 -- target/shutdown.sh@65 -- # break 00:19:07.418 00:53:00 -- target/shutdown.sh@69 -- # return 0 00:19:07.418 00:53:00 -- target/shutdown.sh@135 -- # killprocess 1729715 00:19:07.418 00:53:00 -- common/autotest_common.sh@936 -- # '[' -z 1729715 ']' 00:19:07.418 00:53:00 -- common/autotest_common.sh@940 -- # kill -0 1729715 00:19:07.418 00:53:00 -- common/autotest_common.sh@941 -- # uname 00:19:07.418 00:53:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:07.418 00:53:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1729715 00:19:07.418 00:53:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:07.418 00:53:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:07.418 00:53:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1729715' 00:19:07.418 killing process with pid 1729715 00:19:07.418 00:53:00 -- common/autotest_common.sh@955 -- # kill 1729715 00:19:07.418 00:53:00 -- common/autotest_common.sh@960 -- # wait 1729715 00:19:07.694 [2024-04-27 00:53:00.114060] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2108410 is same with the state(5) to be set 00:19:07.694 [2024-04-27 00:53:00.114114] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2108410 is same with the state(5) to be set 00:19:07.694 [2024-04-27 00:53:00.114123] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2108410 is same with the state(5) to be set 00:19:07.694 [2024-04-27 00:53:00.114130] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2108410 is same with the state(5) to be set 00:19:07.694 [2024-04-27 00:53:00.114138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2108410 is same with the state(5) to be set 00:19:07.694 [2024-04-27 00:53:00.114144] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2108410 is same with the state(5) to be set 00:19:07.694 [2024-04-27 00:53:00.114150] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2108410 is same with the state(5) to be set 00:19:07.694 [2024-04-27 00:53:00.114156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2108410 is same with the state(5) to be set 00:19:07.694 [2024-04-27 00:53:00.114163] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2108410 is same with the state(5) to be set 00:19:07.694 [2024-04-27 00:53:00.114169] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2108410 is same with the state(5) to be 
set 00:19:07.694 [2024-04-27 00:53:00.114175] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2108410 is same with the state(5) to be set [... the same nvmf_tcp_qpair_set_recv_state *ERROR* entry repeats continuously between 00:53:00.114191 and 00:53:00.120145 for tqpair=0x2108410, tqpair=0x210ad20, tqpair=0x21088a0, tqpair=0x2108d30 and tqpair=0x21091c0; duplicate entries omitted ...] 00:19:07.697 [2024-04-27 00:53:00.119677] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:07.698 [2024-04-27 00:53:00.120036] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:07.698 [2024-04-27 00:53:00.120097] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:07.698 [2024-04-27 00:53:00.120151] tcp.c:1587:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x21091c0 is same with the state(5) to be set 00:19:07.698 [2024-04-27 00:53:00.120156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21091c0 is same with the state(5) to be set 00:19:07.698 [2024-04-27 00:53:00.120162] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21091c0 is same with the state(5) to be set 00:19:07.698 [2024-04-27 00:53:00.120168] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21091c0 is same with the state(5) to be set 00:19:07.698 [2024-04-27 00:53:00.120174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21091c0 is same with the state(5) to be set 00:19:07.698 [2024-04-27 00:53:00.120180] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21091c0 is same with the state(5) to be set 00:19:07.698 [2024-04-27 00:53:00.120186] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21091c0 is same with the state(5) to be set 00:19:07.698 [2024-04-27 00:53:00.120192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21091c0 is same with the state(5) to be set 00:19:07.698 [2024-04-27 00:53:00.120199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21091c0 is same with the state(5) to be set 00:19:07.698 [2024-04-27 00:53:00.120270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 
00:53:00.120379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-04-27 00:53:00.120522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-04-27 00:53:00.120529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 
00:53:00.120536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 
00:53:00.120690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 
00:53:00.120836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120876] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.699 [2024-04-27 00:53:00.120882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120894] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.699 [2024-04-27 00:53:00.120901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120901] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.699 [2024-04-27 00:53:00.120911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120912] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.699 [2024-04-27 00:53:00.120920] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.699 [2024-04-27 00:53:00.120921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120926] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.699 [2024-04-27 00:53:00.120932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120933] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.699 [2024-04-27 00:53:00.120940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.699 [2024-04-27 00:53:00.120941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.699 [2024-04-27 00:53:00.120946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.699 [2024-04-27 00:53:00.120948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.699 [2024-04-27 00:53:00.120954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.120957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.700 [2024-04-27 00:53:00.120961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.120965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.700 [2024-04-27 00:53:00.120969] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.120975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.700 [2024-04-27 00:53:00.120979] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.120982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.700 [2024-04-27 00:53:00.120986] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.120991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.700 [2024-04-27 00:53:00.120992] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.120998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.700 [2024-04-27 00:53:00.121000] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.700 [2024-04-27 00:53:00.121009] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.700 [2024-04-27 00:53:00.121016] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121023] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.700 [2024-04-27 00:53:00.121030] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.700 [2024-04-27 00:53:00.121037] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.700 [2024-04-27 00:53:00.121044] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.700 [2024-04-27 00:53:00.121051] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.700 [2024-04-27 00:53:00.121058] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121067] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.700 [2024-04-27 00:53:00.121080] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.700 [2024-04-27 00:53:00.121088] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.700 [2024-04-27 00:53:00.121095] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.700 [2024-04-27 00:53:00.121102] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.700 [2024-04-27 00:53:00.121109] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.700 [2024-04-27 00:53:00.121124] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.700 [2024-04-27 00:53:00.121131] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.700 [2024-04-27 00:53:00.121138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.700 [2024-04-27 00:53:00.121142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.701 [2024-04-27 00:53:00.121145] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.701 [2024-04-27 00:53:00.121152] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.701 [2024-04-27 00:53:00.121161] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.701 [2024-04-27 00:53:00.121176] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.701 [2024-04-27 00:53:00.121184] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.701 [2024-04-27 00:53:00.121191] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.701 [2024-04-27 00:53:00.121199] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.701 [2024-04-27 00:53:00.121206] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.701 [2024-04-27 00:53:00.121222] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.701 [2024-04-27 00:53:00.121229] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.701 [2024-04-27 00:53:00.121236] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.701 [2024-04-27 00:53:00.121243] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.701 [2024-04-27 00:53:00.121253] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.701 [2024-04-27 00:53:00.121260] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.701 [2024-04-27 00:53:00.121270] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121280] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.701 [2024-04-27 00:53:00.121286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.701 [2024-04-27 00:53:00.121293] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.701 [2024-04-27 00:53:00.121299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.701 [2024-04-27 00:53:00.121307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.701 [2024-04-27 00:53:00.121321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.701 [2024-04-27 00:53:00.121328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121334] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121340] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:19:07.701 [2024-04-27 00:53:00.121352] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109650 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.121406] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2121110 was disconnected and freed. reset controller. 
00:19:07.701 [2024-04-27 00:53:00.122197] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122211] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122217] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122230] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122237] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122243] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122255] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122261] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122268] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122289] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122295] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122301] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.701 [2024-04-27 00:53:00.122307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122313] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122319] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122325] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122330] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122337] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122343] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122349] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122355] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122361] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122367] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122373] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122379] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122385] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122397] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122434] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122447] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122453] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122475] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122481] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122489] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122501] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122507] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122513] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122536] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122549] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122554] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122566] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122572] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122590] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.122598] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109ae0 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.123560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2109f70 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124111] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the 
state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124125] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124132] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124141] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124147] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124160] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124167] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124173] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124179] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124185] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124191] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124197] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124203] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124222] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124228] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.702 [2024-04-27 00:53:00.124234] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124253] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124259] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124266] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124272] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124278] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124290] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124295] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124301] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124319] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124325] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124347] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124360] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124366] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124379] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124385] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124391] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 
00:53:00.124402] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124414] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124432] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124438] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124444] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124450] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124456] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124473] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124481] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124487] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124492] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124498] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124504] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.124510] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a400 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125068] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125086] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125093] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same 
with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125100] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125106] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125112] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125119] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125125] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125131] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125137] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125148] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125154] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125160] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125166] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125172] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125178] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125187] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125193] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125204] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125213] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125220] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125226] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125231] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125237] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125243] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125249] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.703 [2024-04-27 00:53:00.125255] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.125261] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.125267] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.125273] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.125279] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.125285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.125291] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.125297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.125303] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.125309] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.125315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.136481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f09fa0 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.136581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9600 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.136656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef5e10 is same with the state(5) to be set 00:19:07.704 
[2024-04-27 00:53:00.136736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f031b0 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.136817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab3760 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.136893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.704 [2024-04-27 00:53:00.136948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20911c0 is same with the state(5) to be set 00:19:07.704 [2024-04-27 00:53:00.136970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.704 [2024-04-27 00:53:00.136978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.136986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.705 [2024-04-27 00:53:00.136992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.136999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.705 [2024-04-27 00:53:00.137005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.137013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.705 [2024-04-27 00:53:00.137020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.137027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef2390 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.705 [2024-04-27 00:53:00.137057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.137065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.705 [2024-04-27 00:53:00.137077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.137084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.705 
[2024-04-27 00:53:00.137091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.137098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.705 [2024-04-27 00:53:00.137104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.137110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04790 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.705 [2024-04-27 00:53:00.137142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.137149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.705 [2024-04-27 00:53:00.137156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.137162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.705 [2024-04-27 00:53:00.137169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.137176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.705 [2024-04-27 00:53:00.137182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.137188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffb630 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137461] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137485] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137491] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 
00:19:07.705 [2024-04-27 00:53:00.137517] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137523] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137536] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137548] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137558] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137564] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137576] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137582] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137594] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137607] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.137613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210a890 is same with the state(5) to be set 00:19:07.705 [2024-04-27 00:53:00.138151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.705 [2024-04-27 00:53:00.138167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.138179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.705 [2024-04-27 00:53:00.138187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.138195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.705 [2024-04-27 00:53:00.138202] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.705 [2024-04-27 00:53:00.138210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.705 [2024-04-27 00:53:00.138220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138650] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.706 [2024-04-27 00:53:00.138693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.706 [2024-04-27 00:53:00.138701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.138991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.138997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.139005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.139011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.139020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.139026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.139036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.139042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.139050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.139057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.139064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.139076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.139084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.139091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.139098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.139105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.139113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.139120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.139180] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2122650 was disconnected and freed. reset controller. 00:19:07.707 [2024-04-27 00:53:00.139347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.139358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.139369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.139380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.139389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.707 [2024-04-27 00:53:00.139396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.707 [2024-04-27 00:53:00.139404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139477] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.708 [2024-04-27 00:53:00.139916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.708 [2024-04-27 00:53:00.139923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.139931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.139937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.139947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.139954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.139961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.139968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.139976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.139982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.139991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.139997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.140006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.140012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.140020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.140027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.140035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.140042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.140049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.140056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.140065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.140076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.140085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:07.709 [2024-04-27 00:53:00.140092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.140100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.140107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.140115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.140122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.140130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.140138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.145270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.145281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.145290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.145297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.145306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.145312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.145320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.145327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.145335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.145341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.145349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.145355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.145363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 
00:53:00.145370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.145378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.145384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.145393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.145399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.145407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.145414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.145422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.145429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.145436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.709 [2024-04-27 00:53:00.145443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.145505] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2126460 was disconnected and freed. reset controller. 
00:19:07.709 [2024-04-27 00:53:00.145629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:07.709 [2024-04-27 00:53:00.145655] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec9600 (9): Bad file descriptor 00:19:07.709 [2024-04-27 00:53:00.147662] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:07.709 [2024-04-27 00:53:00.147690] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef5e10 (9): Bad file descriptor 00:19:07.709 [2024-04-27 00:53:00.147709] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f09fa0 (9): Bad file descriptor 00:19:07.709 [2024-04-27 00:53:00.147724] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f031b0 (9): Bad file descriptor 00:19:07.709 [2024-04-27 00:53:00.147739] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab3760 (9): Bad file descriptor 00:19:07.709 [2024-04-27 00:53:00.147752] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20911c0 (9): Bad file descriptor 00:19:07.709 [2024-04-27 00:53:00.147766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef2390 (9): Bad file descriptor 00:19:07.709 [2024-04-27 00:53:00.147779] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f04790 (9): Bad file descriptor 00:19:07.709 [2024-04-27 00:53:00.147795] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffb630 (9): Bad file descriptor 00:19:07.709 [2024-04-27 00:53:00.147825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.709 [2024-04-27 00:53:00.147835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.709 [2024-04-27 00:53:00.147843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.710 [2024-04-27 00:53:00.147849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.147857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.710 [2024-04-27 00:53:00.147864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.147871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:07.710 [2024-04-27 00:53:00.147878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.147885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03de0 is same with the state(5) to be set 00:19:07.710 [2024-04-27 00:53:00.148243] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:07.710 [2024-04-27 00:53:00.148373] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 
00:19:07.710 [2024-04-27 00:53:00.148848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.710 [2024-04-27 00:53:00.149280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.710 [2024-04-27 00:53:00.149294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec9600 with addr=10.0.0.2, port=4420 00:19:07.710 [2024-04-27 00:53:00.149303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9600 is same with the state(5) to be set 00:19:07.710 [2024-04-27 00:53:00.149641] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:07.710 [2024-04-27 00:53:00.149957] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:07.710 [2024-04-27 00:53:00.150456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.710 [2024-04-27 00:53:00.150790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.710 [2024-04-27 00:53:00.150802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef5e10 with addr=10.0.0.2, port=4420 00:19:07.710 [2024-04-27 00:53:00.150811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef5e10 is same with the state(5) to be set 00:19:07.710 [2024-04-27 00:53:00.150992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.710 [2024-04-27 00:53:00.151303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.710 [2024-04-27 00:53:00.151314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f09fa0 with addr=10.0.0.2, port=4420 00:19:07.710 [2024-04-27 00:53:00.151321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f09fa0 is same with the state(5) to be set 00:19:07.710 [2024-04-27 00:53:00.151332] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec9600 (9): Bad file descriptor 00:19:07.710 [2024-04-27 00:53:00.151449] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:19:07.710 [2024-04-27 00:53:00.151471] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef5e10 (9): Bad file descriptor 00:19:07.710 [2024-04-27 00:53:00.151482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f09fa0 (9): Bad file descriptor 00:19:07.710 [2024-04-27 00:53:00.151492] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:07.710 [2024-04-27 00:53:00.151499] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:07.710 [2024-04-27 00:53:00.151508] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:07.710 [2024-04-27 00:53:00.151568] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.710 [2024-04-27 00:53:00.151578] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:19:07.710 [2024-04-27 00:53:00.151585] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:19:07.710 [2024-04-27 00:53:00.151592] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:19:07.710 [2024-04-27 00:53:00.151604] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:19:07.710 [2024-04-27 00:53:00.151611] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:19:07.710 [2024-04-27 00:53:00.151618] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:19:07.710 [2024-04-27 00:53:00.151656] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.710 [2024-04-27 00:53:00.151664] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.710 [2024-04-27 00:53:00.157723] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f03de0 (9): Bad file descriptor 00:19:07.710 [2024-04-27 00:53:00.157838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 00:53:00.157851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.157865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 00:53:00.157873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.157883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 00:53:00.157895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.157906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 00:53:00.157914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.157923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 00:53:00.157931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.157941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 00:53:00.157949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.157958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 00:53:00.157966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.157975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 
00:53:00.157983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.157992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 00:53:00.158000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.158010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 00:53:00.158017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.158026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 00:53:00.158034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.158043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 00:53:00.158051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.158061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 00:53:00.158068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.710 [2024-04-27 00:53:00.158083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.710 [2024-04-27 00:53:00.158091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158161] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.711 [2024-04-27 00:53:00.158536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.711 [2024-04-27 00:53:00.158547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.158918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.158926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f3b60 is same with the state(5) to be set 00:19:07.712 [2024-04-27 00:53:00.159940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.159956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.159967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.159974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.159982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.159989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.159997] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.160003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.160011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.160018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.160026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.160033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.160041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.160047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.160055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.160061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.712 [2024-04-27 00:53:00.160074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.712 [2024-04-27 00:53:00.160080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.713 [2024-04-27 00:53:00.160536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.713 [2024-04-27 00:53:00.160544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:07.714 [2024-04-27 00:53:00.160594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 
00:53:00.160739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.160887] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.160894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211fcb0 is same with the state(5) to be set 00:19:07.714 [2024-04-27 00:53:00.161902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.161916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.161926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.161933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.161942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.161948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.161956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.161963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.161971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.161978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.161985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.161992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.162001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.162007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.162015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.162022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.162030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.714 [2024-04-27 00:53:00.162038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.714 [2024-04-27 00:53:00.162046] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.715 [2024-04-27 00:53:00.162530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.715 [2024-04-27 00:53:00.162538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:07.716 [2024-04-27 00:53:00.162650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 
00:53:00.162796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.162856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.162864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123b00 is same with the state(5) to be set 00:19:07.716 [2024-04-27 00:53:00.163880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.163894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.163904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.163911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.163919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.163926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.716 [2024-04-27 00:53:00.163934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.716 [2024-04-27 00:53:00.163941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.163949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.163956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.163964] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.163970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.163978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.163986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.163994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.717 [2024-04-27 00:53:00.164453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.717 [2024-04-27 00:53:00.164459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.164820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.164826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2124fb0 is same with the state(5) to be set 00:19:07.718 [2024-04-27 00:53:00.165831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.165843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.165853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.165860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.165868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.165875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.165883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.165889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.165897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.165903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.165911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.165917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.718 [2024-04-27 00:53:00.165925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.718 [2024-04-27 00:53:00.165932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.165940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.165947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.165957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.165964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.165972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.165978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.165986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.165993] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.719 [2024-04-27 00:53:00.166419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.719 [2024-04-27 00:53:00.166427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:07.720 [2024-04-27 00:53:00.166585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 
00:53:00.166730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.166764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.166773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2127910 is same with the state(5) to be set 00:19:07.720 [2024-04-27 00:53:00.167782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.167794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.167804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.167811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.167819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.167826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.167835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.167841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.167849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.167856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.167864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.167871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.720 [2024-04-27 00:53:00.167881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.720 [2024-04-27 00:53:00.167888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.167896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.167902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.167915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.167921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.167929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.167936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.167944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.167951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.167959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.167966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.167974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.167980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.167989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.167995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.721 [2024-04-27 00:53:00.168373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.721 [2024-04-27 00:53:00.168380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.722 [2024-04-27 00:53:00.168734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.722 [2024-04-27 00:53:00.168741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec2bb0 is same with the state(5) to be set 00:19:07.722 [2024-04-27 00:53:00.172803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:07.722 [2024-04-27 00:53:00.172827] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:07.722 [2024-04-27 00:53:00.172836] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:07.722 [2024-04-27 00:53:00.172894] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.722 [2024-04-27 00:53:00.172909] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:07.722 [2024-04-27 00:53:00.172919] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.722 [2024-04-27 00:53:00.172934] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.722 [2024-04-27 00:53:00.173003] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:07.722 [2024-04-27 00:53:00.173013] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:19:07.722 [2024-04-27 00:53:00.173021] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:07.722 [2024-04-27 00:53:00.173033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:07.722 [2024-04-27 00:53:00.173525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.722 [2024-04-27 00:53:00.173982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.722 [2024-04-27 00:53:00.173994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab3760 with addr=10.0.0.2, port=4420 00:19:07.722 [2024-04-27 00:53:00.174003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab3760 is same with the state(5) to be set 00:19:07.722 [2024-04-27 00:53:00.174372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.722 [2024-04-27 00:53:00.174693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.723 [2024-04-27 00:53:00.174703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20911c0 with addr=10.0.0.2, port=4420 00:19:07.723 [2024-04-27 00:53:00.174710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20911c0 is same with the state(5) to be set 00:19:07.723 [2024-04-27 00:53:00.175143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.723 [2024-04-27 00:53:00.175495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.723 [2024-04-27 00:53:00.175506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef2390 with addr=10.0.0.2, port=4420 00:19:07.723 [2024-04-27 00:53:00.175512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef2390 is same with the state(5) to be set 00:19:07.723 [2024-04-27 00:53:00.176862] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:19:07.723 [2024-04-27 00:53:00.176878] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:07.723 [2024-04-27 00:53:00.177360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.723 [2024-04-27 00:53:00.177759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.723 [2024-04-27 00:53:00.177769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f031b0 with addr=10.0.0.2, port=4420 00:19:07.723 [2024-04-27 00:53:00.177777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f031b0 is same with the state(5) to be set 00:19:07.723 [2024-04-27 00:53:00.178166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.723 [2024-04-27 
00:53:00.178552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.723 [2024-04-27 00:53:00.178562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffb630 with addr=10.0.0.2, port=4420 00:19:07.723 [2024-04-27 00:53:00.178569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffb630 is same with the state(5) to be set 00:19:07.723 [2024-04-27 00:53:00.178935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.723 [2024-04-27 00:53:00.179350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.723 [2024-04-27 00:53:00.179362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f04790 with addr=10.0.0.2, port=4420 00:19:07.723 [2024-04-27 00:53:00.179371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f04790 is same with the state(5) to be set 00:19:07.723 [2024-04-27 00:53:00.179807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.723 [2024-04-27 00:53:00.180141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.723 [2024-04-27 00:53:00.180155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ec9600 with addr=10.0.0.2, port=4420 00:19:07.723 [2024-04-27 00:53:00.180163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec9600 is same with the state(5) to be set 00:19:07.723 [2024-04-27 00:53:00.180178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab3760 (9): Bad file descriptor 00:19:07.723 [2024-04-27 00:53:00.180194] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20911c0 (9): Bad file descriptor 00:19:07.723 [2024-04-27 00:53:00.180206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef2390 (9): Bad file descriptor 00:19:07.723 [2024-04-27 00:53:00.180305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.723 [2024-04-27 00:53:00.180694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.723 [2024-04-27 00:53:00.180702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.180978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.180989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:07.724 [2024-04-27 00:53:00.180998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 
00:53:00.181217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.724 [2024-04-27 00:53:00.181317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.724 [2024-04-27 00:53:00.181328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181415] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.725 [2024-04-27 00:53:00.181593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.725 [2024-04-27 00:53:00.181603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1740 is same with the state(5) to be set 00:19:07.725 task offset: 35712 on job bdev=Nvme3n1 fails 00:19:07.725 
00:19:07.725 Latency(us) 
00:19:07.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:07.725 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:07.725 Job: Nvme1n1 ended in about 0.91 seconds with error 
00:19:07.725 Verification LBA range: start 0x0 length 0x400 
00:19:07.725 Nvme1n1 : 0.91 140.68 8.79 70.34 0.00 300332.67 22339.23 255305.46 
00:19:07.725 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:07.725 Job: Nvme2n1 ended in about 0.91 seconds with error 
00:19:07.725 Verification LBA range: start 0x0 length 0x400 
00:19:07.725 Nvme2n1 : 0.91 140.38 8.77 70.19 0.00 295705.38 22453.20 242540.19 
00:19:07.725 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:07.725 Job: Nvme3n1 ended in about 0.89 seconds with error 
00:19:07.725 Verification LBA range: start 0x0 length 0x400 
00:19:07.725 Nvme3n1 : 0.89 288.23 18.01 72.06 0.00 169448.49 17096.35 200597.15 
00:19:07.725 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:07.725 Job: Nvme4n1 ended in about 0.90 seconds with error 
00:19:07.725 Verification LBA range: start 0x0 length 0x400 
00:19:07.725 Nvme4n1 : 0.90 283.29 17.71 71.38 0.00 169026.61 10086.85 214274.23 
00:19:07.725 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:07.725 Job: Nvme5n1 ended in about 0.91 seconds with error 
00:19:07.725 Verification LBA range: start 0x0 length 0x400 
00:19:07.725 Nvme5n1 : 0.91 210.12 13.13 70.04 0.00 210343.62 22453.20 226127.69 
00:19:07.725 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:07.725 Job: Nvme6n1 ended in about 0.92 seconds with error 
00:19:07.725 Verification LBA range: start 0x0 length 0x400 
00:19:07.725 Nvme6n1 : 0.92 209.67 13.10 69.89 0.00 206865.36 23023.08 222480.47 
00:19:07.725 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:07.725 Job: Nvme7n1 ended in about 0.90 seconds with error 
00:19:07.725 Verification LBA range: start 0x0 length 0x400 
00:19:07.725 Nvme7n1 : 0.90 213.90 13.37 71.30 0.00 198383.64 10542.75 225215.89 
00:19:07.725 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:07.725 Job: Nvme8n1 ended in about 0.92 seconds with error 
00:19:07.725 Verification LBA range: start 0x0 length 0x400 
00:19:07.725 Nvme8n1 : 0.92 139.48 8.72 69.74 0.00 266084.47 39663.53 284483.23 
00:19:07.725 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:07.725 Job: Nvme9n1 ended in about 0.93 seconds with error 
00:19:07.725 Verification LBA range: start 0x0 length 0x400 
00:19:07.725 Nvme9n1 : 0.93 137.26 8.58 68.63 0.00 265728.30 38523.77 255305.46 
00:19:07.725 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:07.725 Job: Nvme10n1 ended in about 0.92 seconds with error 
00:19:07.725 Verification LBA range: start 0x0 length 0x400 
00:19:07.725 Nvme10n1 : 0.92 139.18 8.70 69.59 0.00 256217.56 22339.23 260776.29 
00:19:07.725 =================================================================================================================== 
00:19:07.725 Total : 1902.18 118.89 703.16 0.00 224559.10 10086.85 284483.23 
00:19:07.725 [2024-04-27 00:53:00.207173] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:07.725 [2024-04-27 00:53:00.207216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:19:07.725 [2024-04-27 00:53:00.207653] 
posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.725 [2024-04-27 00:53:00.207984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.725 [2024-04-27 00:53:00.207994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f09fa0 with addr=10.0.0.2, port=4420 00:19:07.725 [2024-04-27 00:53:00.208004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f09fa0 is same with the state(5) to be set 00:19:07.725 [2024-04-27 00:53:00.208421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.725 [2024-04-27 00:53:00.208737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.725 [2024-04-27 00:53:00.208747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef5e10 with addr=10.0.0.2, port=4420 00:19:07.725 [2024-04-27 00:53:00.208755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef5e10 is same with the state(5) to be set 00:19:07.725 [2024-04-27 00:53:00.208767] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f031b0 (9): Bad file descriptor 00:19:07.725 [2024-04-27 00:53:00.208778] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffb630 (9): Bad file descriptor 00:19:07.725 [2024-04-27 00:53:00.208786] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f04790 (9): Bad file descriptor 00:19:07.725 [2024-04-27 00:53:00.208795] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec9600 (9): Bad file descriptor 00:19:07.725 [2024-04-27 00:53:00.208803] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:07.725 [2024-04-27 00:53:00.208810] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:07.725 [2024-04-27 00:53:00.208818] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:07.726 [2024-04-27 00:53:00.208832] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:07.726 [2024-04-27 00:53:00.208838] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:19:07.726 [2024-04-27 00:53:00.208844] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:07.726 [2024-04-27 00:53:00.208854] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:19:07.726 [2024-04-27 00:53:00.208860] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:19:07.726 [2024-04-27 00:53:00.208866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:19:07.726 [2024-04-27 00:53:00.208901] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.726 [2024-04-27 00:53:00.208913] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.726 [2024-04-27 00:53:00.208923] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:07.726 [2024-04-27 00:53:00.208939] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.726 [2024-04-27 00:53:00.208949] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.726 [2024-04-27 00:53:00.208958] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.726 [2024-04-27 00:53:00.208973] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.726 [2024-04-27 00:53:00.209063] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.726 [2024-04-27 00:53:00.209075] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.726 [2024-04-27 00:53:00.209081] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.726 [2024-04-27 00:53:00.209419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.726 [2024-04-27 00:53:00.209754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.726 [2024-04-27 00:53:00.209764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f03de0 with addr=10.0.0.2, port=4420 00:19:07.726 [2024-04-27 00:53:00.209772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f03de0 is same with the state(5) to be set 00:19:07.726 [2024-04-27 00:53:00.209783] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f09fa0 (9): Bad file descriptor 00:19:07.726 [2024-04-27 00:53:00.209792] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef5e10 (9): Bad file descriptor 00:19:07.726 [2024-04-27 00:53:00.209799] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:19:07.726 [2024-04-27 00:53:00.209805] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:19:07.726 [2024-04-27 00:53:00.209812] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:19:07.726 [2024-04-27 00:53:00.209821] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:19:07.726 [2024-04-27 00:53:00.209827] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:19:07.726 [2024-04-27 00:53:00.209833] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:07.726 [2024-04-27 00:53:00.209842] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:19:07.726 [2024-04-27 00:53:00.209848] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:19:07.726 [2024-04-27 00:53:00.209854] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:19:07.726 [2024-04-27 00:53:00.209864] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:19:07.726 [2024-04-27 00:53:00.209871] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:19:07.726 [2024-04-27 00:53:00.209877] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:07.726 [2024-04-27 00:53:00.209888] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.726 [2024-04-27 00:53:00.209897] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.726 [2024-04-27 00:53:00.209925] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.726 [2024-04-27 00:53:00.209934] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.726 [2024-04-27 00:53:00.209944] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.726 [2024-04-27 00:53:00.209953] bdev_nvme.c:2878:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:07.726 [2024-04-27 00:53:00.210216] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.726 [2024-04-27 00:53:00.210224] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.726 [2024-04-27 00:53:00.210230] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.726 [2024-04-27 00:53:00.210238] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.726 [2024-04-27 00:53:00.210253] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f03de0 (9): Bad file descriptor 00:19:07.726 [2024-04-27 00:53:00.210262] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:19:07.726 [2024-04-27 00:53:00.210267] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:19:07.726 [2024-04-27 00:53:00.210274] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:19:07.726 [2024-04-27 00:53:00.210282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:19:07.726 [2024-04-27 00:53:00.210288] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:19:07.726 [2024-04-27 00:53:00.210294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:19:07.726 [2024-04-27 00:53:00.210579] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:07.726 [2024-04-27 00:53:00.210593] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:07.726 [2024-04-27 00:53:00.210601] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:07.726 [2024-04-27 00:53:00.210609] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:07.726 [2024-04-27 00:53:00.210614] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.726 [2024-04-27 00:53:00.210635] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:19:07.726 [2024-04-27 00:53:00.210642] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:19:07.726 [2024-04-27 00:53:00.210649] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:07.726 [2024-04-27 00:53:00.210683] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.726 [2024-04-27 00:53:00.211143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.726 [2024-04-27 00:53:00.211690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.726 [2024-04-27 00:53:00.211704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef2390 with addr=10.0.0.2, port=4420 00:19:07.726 [2024-04-27 00:53:00.211713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef2390 is same with the state(5) to be set 00:19:07.726 [2024-04-27 00:53:00.212200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.726 [2024-04-27 00:53:00.212574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.726 [2024-04-27 00:53:00.212584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20911c0 with addr=10.0.0.2, port=4420 00:19:07.726 [2024-04-27 00:53:00.212591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20911c0 is same with the state(5) to be set 00:19:07.726 [2024-04-27 00:53:00.212908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.726 [2024-04-27 00:53:00.213306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:07.726 [2024-04-27 00:53:00.213317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ab3760 with addr=10.0.0.2, port=4420 00:19:07.726 [2024-04-27 00:53:00.213324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ab3760 is same with the state(5) to be set 00:19:07.726 [2024-04-27 00:53:00.213357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef2390 (9): Bad file descriptor 00:19:07.726 [2024-04-27 00:53:00.213369] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20911c0 (9): Bad file descriptor 00:19:07.726 [2024-04-27 00:53:00.213380] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ab3760 (9): Bad file descriptor 00:19:07.726 [2024-04-27 00:53:00.213414] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:19:07.726 [2024-04-27 00:53:00.213422] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:19:07.727 [2024-04-27 00:53:00.213429] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:19:07.727 [2024-04-27 00:53:00.213438] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:07.727 [2024-04-27 00:53:00.213444] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:19:07.727 [2024-04-27 00:53:00.213450] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:07.727 [2024-04-27 00:53:00.213457] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:07.727 [2024-04-27 00:53:00.213463] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:07.727 [2024-04-27 00:53:00.213470] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:07.727 [2024-04-27 00:53:00.213495] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.727 [2024-04-27 00:53:00.213502] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.727 [2024-04-27 00:53:00.213507] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:07.987 00:53:00 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:07.987 00:53:00 -- target/shutdown.sh@139 -- # sleep 1 00:19:08.926 00:53:01 -- target/shutdown.sh@142 -- # kill -9 1730003 00:19:08.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1730003) - No such process 00:19:08.926 00:53:01 -- target/shutdown.sh@142 -- # true 00:19:08.926 00:53:01 -- target/shutdown.sh@144 -- # stoptarget 00:19:08.926 00:53:01 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:08.926 00:53:01 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:08.926 00:53:01 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:08.926 00:53:01 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:08.926 00:53:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:08.926 00:53:01 -- nvmf/common.sh@117 -- # sync 00:19:08.926 00:53:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:08.926 00:53:01 -- nvmf/common.sh@120 -- # set +e 00:19:08.926 00:53:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:08.926 00:53:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:08.926 rmmod nvme_tcp 00:19:08.926 rmmod nvme_fabrics 00:19:09.187 rmmod nvme_keyring 00:19:09.187 00:53:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:09.187 00:53:01 -- nvmf/common.sh@124 -- # set -e 00:19:09.187 00:53:01 -- nvmf/common.sh@125 -- # return 0 00:19:09.187 00:53:01 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:09.187 00:53:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:09.187 00:53:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:09.187 00:53:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:09.187 00:53:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:09.187 00:53:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:09.187 00:53:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.187 00:53:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.187 00:53:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.094 00:53:03 -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:19:11.094 00:19:11.094 real 0m8.036s 00:19:11.094 user 0m20.169s 00:19:11.094 sys 0m1.377s 00:19:11.094 00:53:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:11.094 00:53:03 -- common/autotest_common.sh@10 -- # set +x 00:19:11.094 ************************************ 00:19:11.094 END TEST nvmf_shutdown_tc3 00:19:11.094 ************************************ 00:19:11.094 00:53:03 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:19:11.094 00:19:11.094 real 0m31.709s 00:19:11.094 user 1m20.127s 00:19:11.094 sys 0m8.432s 00:19:11.094 00:53:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:11.094 00:53:03 -- common/autotest_common.sh@10 -- # set +x 00:19:11.094 ************************************ 00:19:11.094 END TEST nvmf_shutdown 00:19:11.094 ************************************ 00:19:11.355 00:53:03 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:19:11.355 00:53:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:11.355 00:53:03 -- common/autotest_common.sh@10 -- # set +x 00:19:11.355 00:53:03 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:19:11.355 00:53:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:11.355 00:53:03 -- common/autotest_common.sh@10 -- # set +x 00:19:11.355 00:53:03 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:19:11.355 00:53:03 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:11.355 00:53:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:11.355 00:53:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:11.355 00:53:03 -- common/autotest_common.sh@10 -- # set +x 00:19:11.355 ************************************ 00:19:11.355 START TEST nvmf_multicontroller 00:19:11.355 ************************************ 00:19:11.355 00:53:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:11.708 * Looking for test storage... 
00:19:11.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:11.708 00:53:04 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:11.708 00:53:04 -- nvmf/common.sh@7 -- # uname -s 00:19:11.708 00:53:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.708 00:53:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.708 00:53:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.708 00:53:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.708 00:53:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.708 00:53:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.708 00:53:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.708 00:53:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.708 00:53:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.708 00:53:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.708 00:53:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:11.708 00:53:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:11.708 00:53:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.708 00:53:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.708 00:53:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:11.708 00:53:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.708 00:53:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:11.708 00:53:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.708 00:53:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.708 00:53:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.708 00:53:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.708 00:53:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.708 00:53:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.708 00:53:04 -- paths/export.sh@5 -- # export PATH 00:19:11.708 00:53:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.708 00:53:04 -- nvmf/common.sh@47 -- # : 0 00:19:11.708 00:53:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:11.708 00:53:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:11.708 00:53:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.708 00:53:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.708 00:53:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.709 00:53:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:11.709 00:53:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:11.709 00:53:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:11.709 00:53:04 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:11.709 00:53:04 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:11.709 00:53:04 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:11.709 00:53:04 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:11.709 00:53:04 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:11.709 00:53:04 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:11.709 00:53:04 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:11.709 00:53:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:11.709 00:53:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.709 00:53:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:11.709 00:53:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:11.709 00:53:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:11.709 00:53:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.709 00:53:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.709 00:53:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.709 00:53:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:11.709 00:53:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:11.709 00:53:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:11.709 00:53:04 -- common/autotest_common.sh@10 -- # set +x 00:19:16.990 00:53:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:16.990 00:53:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:16.990 00:53:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:16.990 00:53:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:16.990 
00:53:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:16.990 00:53:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:16.990 00:53:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:16.990 00:53:09 -- nvmf/common.sh@295 -- # net_devs=() 00:19:16.990 00:53:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:16.990 00:53:09 -- nvmf/common.sh@296 -- # e810=() 00:19:16.990 00:53:09 -- nvmf/common.sh@296 -- # local -ga e810 00:19:16.990 00:53:09 -- nvmf/common.sh@297 -- # x722=() 00:19:16.990 00:53:09 -- nvmf/common.sh@297 -- # local -ga x722 00:19:16.990 00:53:09 -- nvmf/common.sh@298 -- # mlx=() 00:19:16.990 00:53:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:16.990 00:53:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:16.990 00:53:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:16.990 00:53:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:16.990 00:53:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:16.990 00:53:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:16.990 00:53:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:16.990 00:53:09 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:16.990 00:53:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:16.990 00:53:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:16.990 00:53:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:16.990 00:53:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:16.990 00:53:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:16.990 00:53:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:16.990 00:53:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:16.990 00:53:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.990 00:53:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:16.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:16.990 00:53:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:16.990 00:53:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:16.990 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:16.990 00:53:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:16.990 00:53:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:19:16.990 00:53:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.990 00:53:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:16.990 00:53:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.990 00:53:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:16.990 Found net devices under 0000:86:00.0: cvl_0_0 00:19:16.990 00:53:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.990 00:53:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:16.990 00:53:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:16.990 00:53:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:16.990 00:53:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:16.990 00:53:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:16.990 Found net devices under 0000:86:00.1: cvl_0_1 00:19:16.990 00:53:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:16.990 00:53:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:16.990 00:53:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:16.990 00:53:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:16.990 00:53:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:16.990 00:53:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.990 00:53:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:16.990 00:53:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:16.990 00:53:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:16.990 00:53:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:16.990 00:53:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:16.990 00:53:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:16.990 00:53:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:16.990 00:53:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.991 00:53:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:16.991 00:53:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:16.991 00:53:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:16.991 00:53:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:16.991 00:53:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:16.991 00:53:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:16.991 00:53:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:16.991 00:53:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:17.250 00:53:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:17.250 00:53:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:17.250 00:53:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:17.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:17.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:19:17.250 00:19:17.250 --- 10.0.0.2 ping statistics --- 00:19:17.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.250 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:19:17.250 00:53:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:17.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:19:17.250 00:19:17.250 --- 10.0.0.1 ping statistics --- 00:19:17.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.250 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:19:17.250 00:53:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.250 00:53:09 -- nvmf/common.sh@411 -- # return 0 00:19:17.250 00:53:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:17.250 00:53:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.250 00:53:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:17.250 00:53:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:17.250 00:53:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.250 00:53:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:17.250 00:53:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:17.250 00:53:09 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:17.250 00:53:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:17.250 00:53:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:17.250 00:53:09 -- common/autotest_common.sh@10 -- # set +x 00:19:17.250 00:53:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:17.250 00:53:09 -- nvmf/common.sh@470 -- # nvmfpid=1734108 00:19:17.250 00:53:09 -- nvmf/common.sh@471 -- # waitforlisten 1734108 00:19:17.250 00:53:09 -- common/autotest_common.sh@817 -- # '[' -z 1734108 ']' 00:19:17.250 00:53:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.250 00:53:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:17.250 00:53:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.250 00:53:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:17.250 00:53:09 -- common/autotest_common.sh@10 -- # set +x 00:19:17.250 [2024-04-27 00:53:09.819222] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:17.250 [2024-04-27 00:53:09.819270] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.250 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.250 [2024-04-27 00:53:09.877177] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:17.509 [2024-04-27 00:53:09.962159] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.509 [2024-04-27 00:53:09.962192] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:17.509 [2024-04-27 00:53:09.962200] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.509 [2024-04-27 00:53:09.962205] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.509 [2024-04-27 00:53:09.962223] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.509 [2024-04-27 00:53:09.962320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.509 [2024-04-27 00:53:09.962405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:17.509 [2024-04-27 00:53:09.962407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.078 00:53:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:18.078 00:53:10 -- common/autotest_common.sh@850 -- # return 0 00:19:18.078 00:53:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:18.078 00:53:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:18.078 00:53:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.078 00:53:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.078 00:53:10 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:18.078 00:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.078 00:53:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.078 [2024-04-27 00:53:10.684693] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.078 00:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.078 00:53:10 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:18.078 00:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.078 00:53:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.078 Malloc0 00:19:18.078 00:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.078 00:53:10 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:18.078 00:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.078 00:53:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.078 00:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.078 00:53:10 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:18.078 00:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.078 00:53:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.078 00:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.078 00:53:10 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:18.078 00:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.078 00:53:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.078 [2024-04-27 00:53:10.738662] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.078 00:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.078 00:53:10 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:18.078 00:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.078 00:53:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.078 [2024-04-27 00:53:10.746586] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4421 *** 00:19:18.078 00:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.078 00:53:10 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:18.078 00:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.078 00:53:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.078 Malloc1 00:19:18.078 00:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.078 00:53:10 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:18.078 00:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.078 00:53:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.337 00:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.337 00:53:10 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:18.337 00:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.338 00:53:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.338 00:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.338 00:53:10 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:18.338 00:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.338 00:53:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.338 00:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.338 00:53:10 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:18.338 00:53:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:18.338 00:53:10 -- common/autotest_common.sh@10 -- # set +x 00:19:18.338 00:53:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:18.338 00:53:10 -- host/multicontroller.sh@44 -- # bdevperf_pid=1734349 00:19:18.338 00:53:10 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:18.338 00:53:10 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:18.338 00:53:10 -- host/multicontroller.sh@47 -- # waitforlisten 1734349 /var/tmp/bdevperf.sock 00:19:18.338 00:53:10 -- common/autotest_common.sh@817 -- # '[' -z 1734349 ']' 00:19:18.338 00:53:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.338 00:53:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:18.338 00:53:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
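The trace above sets up the multicontroller fixture: a TCP transport, two subsystems (cnode1, cnode2) each backed by a 64 MB malloc bdev with 512-byte blocks, listeners on 10.0.0.2 ports 4420 and 4421, and a bdevperf process started with -z so it waits for commands on its own RPC socket. A minimal hand-driven sketch of the same layout, assuming rpc_cmd resolves to scripts/rpc.py against the target's default socket as it does in the SPDK tree; the commands are lifted from the trace, the wrapper variable is only illustrative:

RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode2 / Malloc1 are created the same way with serial SPDK00000000000002.
# bdevperf then runs in wait-for-RPC mode on a private socket, as in the trace:
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &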
00:19:18.338 00:53:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:18.338 00:53:10 -- common/autotest_common.sh@10 -- # set +x 00:19:19.276 00:53:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:19.276 00:53:11 -- common/autotest_common.sh@850 -- # return 0 00:19:19.276 00:53:11 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:19.276 00:53:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.276 00:53:11 -- common/autotest_common.sh@10 -- # set +x 00:19:19.276 NVMe0n1 00:19:19.276 00:53:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.276 00:53:11 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:19.276 00:53:11 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:19.276 00:53:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.276 00:53:11 -- common/autotest_common.sh@10 -- # set +x 00:19:19.276 00:53:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.276 1 00:19:19.276 00:53:11 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:19.276 00:53:11 -- common/autotest_common.sh@638 -- # local es=0 00:19:19.276 00:53:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:19.276 00:53:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:19.276 00:53:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.276 00:53:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:19.276 00:53:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.276 00:53:11 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:19.276 00:53:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.276 00:53:11 -- common/autotest_common.sh@10 -- # set +x 00:19:19.276 request: 00:19:19.276 { 00:19:19.276 "name": "NVMe0", 00:19:19.276 "trtype": "tcp", 00:19:19.276 "traddr": "10.0.0.2", 00:19:19.276 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:19.276 "hostaddr": "10.0.0.2", 00:19:19.276 "hostsvcid": "60000", 00:19:19.276 "adrfam": "ipv4", 00:19:19.276 "trsvcid": "4420", 00:19:19.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.276 "method": "bdev_nvme_attach_controller", 00:19:19.276 "req_id": 1 00:19:19.276 } 00:19:19.276 Got JSON-RPC error response 00:19:19.276 response: 00:19:19.276 { 00:19:19.276 "code": -114, 00:19:19.276 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:19.276 } 00:19:19.276 00:53:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:19.276 00:53:11 -- common/autotest_common.sh@641 -- # es=1 00:19:19.276 00:53:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:19.276 00:53:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:19.276 00:53:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:19.276 00:53:11 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:19.276 00:53:11 -- common/autotest_common.sh@638 -- # local es=0 00:19:19.276 00:53:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:19.276 00:53:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:19.276 00:53:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.276 00:53:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:19.276 00:53:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.276 00:53:11 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:19.276 00:53:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.276 00:53:11 -- common/autotest_common.sh@10 -- # set +x 00:19:19.276 request: 00:19:19.276 { 00:19:19.276 "name": "NVMe0", 00:19:19.276 "trtype": "tcp", 00:19:19.276 "traddr": "10.0.0.2", 00:19:19.276 "hostaddr": "10.0.0.2", 00:19:19.276 "hostsvcid": "60000", 00:19:19.276 "adrfam": "ipv4", 00:19:19.276 "trsvcid": "4420", 00:19:19.276 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:19.276 "method": "bdev_nvme_attach_controller", 00:19:19.276 "req_id": 1 00:19:19.276 } 00:19:19.276 Got JSON-RPC error response 00:19:19.276 response: 00:19:19.276 { 00:19:19.276 "code": -114, 00:19:19.277 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:19.277 } 00:19:19.277 00:53:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:19.277 00:53:11 -- common/autotest_common.sh@641 -- # es=1 00:19:19.277 00:53:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:19.277 00:53:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:19.277 00:53:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:19.277 00:53:11 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:19.277 00:53:11 -- common/autotest_common.sh@638 -- # local es=0 00:19:19.277 00:53:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:19.277 00:53:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:19.277 00:53:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.277 00:53:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:19.277 00:53:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.277 00:53:11 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:19.277 00:53:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.277 00:53:11 -- common/autotest_common.sh@10 -- # set +x 00:19:19.277 request: 00:19:19.277 { 00:19:19.277 "name": "NVMe0", 00:19:19.277 "trtype": "tcp", 00:19:19.277 "traddr": "10.0.0.2", 00:19:19.277 "hostaddr": 
"10.0.0.2", 00:19:19.277 "hostsvcid": "60000", 00:19:19.277 "adrfam": "ipv4", 00:19:19.277 "trsvcid": "4420", 00:19:19.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.277 "multipath": "disable", 00:19:19.277 "method": "bdev_nvme_attach_controller", 00:19:19.277 "req_id": 1 00:19:19.277 } 00:19:19.277 Got JSON-RPC error response 00:19:19.277 response: 00:19:19.277 { 00:19:19.277 "code": -114, 00:19:19.277 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:19:19.277 } 00:19:19.277 00:53:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:19.277 00:53:11 -- common/autotest_common.sh@641 -- # es=1 00:19:19.277 00:53:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:19.277 00:53:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:19.277 00:53:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:19.277 00:53:11 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:19.277 00:53:11 -- common/autotest_common.sh@638 -- # local es=0 00:19:19.277 00:53:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:19.277 00:53:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:19:19.277 00:53:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.277 00:53:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:19:19.277 00:53:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:19.277 00:53:11 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:19.277 00:53:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.277 00:53:11 -- common/autotest_common.sh@10 -- # set +x 00:19:19.277 request: 00:19:19.277 { 00:19:19.277 "name": "NVMe0", 00:19:19.277 "trtype": "tcp", 00:19:19.277 "traddr": "10.0.0.2", 00:19:19.277 "hostaddr": "10.0.0.2", 00:19:19.277 "hostsvcid": "60000", 00:19:19.277 "adrfam": "ipv4", 00:19:19.277 "trsvcid": "4420", 00:19:19.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.277 "multipath": "failover", 00:19:19.277 "method": "bdev_nvme_attach_controller", 00:19:19.277 "req_id": 1 00:19:19.277 } 00:19:19.277 Got JSON-RPC error response 00:19:19.277 response: 00:19:19.277 { 00:19:19.277 "code": -114, 00:19:19.277 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:19:19.277 } 00:19:19.277 00:53:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:19:19.277 00:53:11 -- common/autotest_common.sh@641 -- # es=1 00:19:19.277 00:53:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:19.277 00:53:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:19.277 00:53:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:19.277 00:53:11 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:19.277 00:53:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.277 00:53:11 -- common/autotest_common.sh@10 -- # set +x 00:19:19.536 00:19:19.536 00:53:12 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:19:19.536 00:53:12 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:19.536 00:53:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.536 00:53:12 -- common/autotest_common.sh@10 -- # set +x 00:19:19.536 00:53:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.536 00:53:12 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:19.536 00:53:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.536 00:53:12 -- common/autotest_common.sh@10 -- # set +x 00:19:19.536 00:19:19.536 00:53:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.536 00:53:12 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:19.536 00:53:12 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:19.536 00:53:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.536 00:53:12 -- common/autotest_common.sh@10 -- # set +x 00:19:19.536 00:53:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.536 00:53:12 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:19.536 00:53:12 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:20.915 0 00:19:20.915 00:53:13 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:20.915 00:53:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.915 00:53:13 -- common/autotest_common.sh@10 -- # set +x 00:19:20.915 00:53:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.915 00:53:13 -- host/multicontroller.sh@100 -- # killprocess 1734349 00:19:20.915 00:53:13 -- common/autotest_common.sh@936 -- # '[' -z 1734349 ']' 00:19:20.915 00:53:13 -- common/autotest_common.sh@940 -- # kill -0 1734349 00:19:20.915 00:53:13 -- common/autotest_common.sh@941 -- # uname 00:19:20.915 00:53:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:20.915 00:53:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1734349 00:19:20.915 00:53:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:20.915 00:53:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:20.915 00:53:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1734349' 00:19:20.915 killing process with pid 1734349 00:19:20.915 00:53:13 -- common/autotest_common.sh@955 -- # kill 1734349 00:19:20.915 00:53:13 -- common/autotest_common.sh@960 -- # wait 1734349 00:19:20.915 00:53:13 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:20.915 00:53:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.915 00:53:13 -- common/autotest_common.sh@10 -- # set +x 00:19:20.915 00:53:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.915 00:53:13 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:20.915 00:53:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:20.915 00:53:13 -- common/autotest_common.sh@10 -- # set +x 00:19:20.915 00:53:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:20.915 00:53:13 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
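The four -114 JSON-RPC responses above are the point of this test: bdev_nvme_attach_controller refuses to reuse an existing controller name when the host identity, target subsystem, or multipath mode does not line up, and the NOT rpc_cmd wrappers assert exactly those failures. A hedged sketch of the sequence against the bdevperf RPC socket, reusing only names and addresses that appear in the trace:

RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
# First path creates controller NVMe0 (and bdev NVMe0n1) on cnode1 port 4420.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# Each of the following is expected to fail with JSON-RPC code -114, as logged:
#   same name with a different hostnqn (-q nqn.2021-09-7.io.spdk:00001)
#   same name pointed at a different subsystem (cnode2)
#   same path again with -x disable (multipath disabled)
#   same path again with -x failover (that network path is already attached)
# The 4421 listener attaches successfully under the same name, as at multicontroller.sh@79:
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1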
00:19:20.915 00:53:13 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:20.915 00:53:13 -- common/autotest_common.sh@1598 -- # read -r file 00:19:20.915 00:53:13 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:19:20.915 00:53:13 -- common/autotest_common.sh@1597 -- # sort -u 00:19:20.915 00:53:13 -- common/autotest_common.sh@1599 -- # cat 00:19:20.915 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:20.915 [2024-04-27 00:53:10.844863] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:20.915 [2024-04-27 00:53:10.844912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734349 ] 00:19:20.915 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.915 [2024-04-27 00:53:10.897747] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.915 [2024-04-27 00:53:10.976311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.915 [2024-04-27 00:53:12.144077] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name 0b928525-0a0a-4c17-bfa2-7b9c5c8f7589 already exists 00:19:20.915 [2024-04-27 00:53:12.144106] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:0b928525-0a0a-4c17-bfa2-7b9c5c8f7589 alias for bdev NVMe1n1 00:19:20.915 [2024-04-27 00:53:12.144115] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:20.915 Running I/O for 1 seconds... 00:19:20.915 00:19:20.915 Latency(us) 00:19:20.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.915 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:20.915 NVMe0n1 : 1.00 22658.47 88.51 0.00 0.00 5635.50 3362.28 23023.08 00:19:20.915 =================================================================================================================== 00:19:20.915 Total : 22658.47 88.51 0.00 0.00 5635.50 3362.28 23023.08 00:19:20.915 Received shutdown signal, test time was about 1.000000 seconds 00:19:20.915 00:19:20.915 Latency(us) 00:19:20.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.915 =================================================================================================================== 00:19:20.915 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.915 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:20.915 00:53:13 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:20.915 00:53:13 -- common/autotest_common.sh@1598 -- # read -r file 00:19:20.915 00:53:13 -- host/multicontroller.sh@108 -- # nvmftestfini 00:19:20.915 00:53:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:20.915 00:53:13 -- nvmf/common.sh@117 -- # sync 00:19:20.915 00:53:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:20.915 00:53:13 -- nvmf/common.sh@120 -- # set +e 00:19:20.915 00:53:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:20.915 00:53:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:20.915 rmmod nvme_tcp 00:19:21.174 rmmod nvme_fabrics 00:19:21.174 rmmod nvme_keyring 00:19:21.174 00:53:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:21.174 00:53:13 -- nvmf/common.sh@124 -- # set -e 
00:19:21.174 00:53:13 -- nvmf/common.sh@125 -- # return 0 00:19:21.174 00:53:13 -- nvmf/common.sh@478 -- # '[' -n 1734108 ']' 00:19:21.174 00:53:13 -- nvmf/common.sh@479 -- # killprocess 1734108 00:19:21.174 00:53:13 -- common/autotest_common.sh@936 -- # '[' -z 1734108 ']' 00:19:21.174 00:53:13 -- common/autotest_common.sh@940 -- # kill -0 1734108 00:19:21.174 00:53:13 -- common/autotest_common.sh@941 -- # uname 00:19:21.174 00:53:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:21.174 00:53:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1734108 00:19:21.174 00:53:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:21.174 00:53:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:21.174 00:53:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1734108' 00:19:21.174 killing process with pid 1734108 00:19:21.174 00:53:13 -- common/autotest_common.sh@955 -- # kill 1734108 00:19:21.174 00:53:13 -- common/autotest_common.sh@960 -- # wait 1734108 00:19:21.433 00:53:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:21.433 00:53:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:21.433 00:53:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:21.433 00:53:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:21.433 00:53:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:21.433 00:53:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.433 00:53:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.433 00:53:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.342 00:53:16 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:23.342 00:19:23.342 real 0m12.005s 00:19:23.342 user 0m16.447s 00:19:23.342 sys 0m5.017s 00:19:23.342 00:53:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:23.342 00:53:16 -- common/autotest_common.sh@10 -- # set +x 00:19:23.342 ************************************ 00:19:23.342 END TEST nvmf_multicontroller 00:19:23.342 ************************************ 00:19:23.602 00:53:16 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:23.602 00:53:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:23.602 00:53:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:23.602 00:53:16 -- common/autotest_common.sh@10 -- # set +x 00:19:23.602 ************************************ 00:19:23.602 START TEST nvmf_aer 00:19:23.602 ************************************ 00:19:23.602 00:53:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:23.602 * Looking for test storage... 
00:19:23.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:23.602 00:53:16 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:23.602 00:53:16 -- nvmf/common.sh@7 -- # uname -s 00:19:23.602 00:53:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.602 00:53:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.602 00:53:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.602 00:53:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.602 00:53:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.602 00:53:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.602 00:53:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.602 00:53:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.602 00:53:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.602 00:53:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.602 00:53:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:23.602 00:53:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:23.602 00:53:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.602 00:53:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.602 00:53:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:23.602 00:53:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.602 00:53:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:23.602 00:53:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.602 00:53:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.602 00:53:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.602 00:53:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.602 00:53:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.602 00:53:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.602 00:53:16 -- paths/export.sh@5 -- # export PATH 00:19:23.602 00:53:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.602 00:53:16 -- nvmf/common.sh@47 -- # : 0 00:19:23.602 00:53:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:23.602 00:53:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:23.602 00:53:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.602 00:53:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.602 00:53:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.603 00:53:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:23.603 00:53:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:23.603 00:53:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:23.603 00:53:16 -- host/aer.sh@11 -- # nvmftestinit 00:19:23.603 00:53:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:23.603 00:53:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.603 00:53:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:23.603 00:53:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:23.603 00:53:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:23.603 00:53:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.603 00:53:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.603 00:53:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.603 00:53:16 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:23.603 00:53:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:23.603 00:53:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:23.603 00:53:16 -- common/autotest_common.sh@10 -- # set +x 00:19:28.993 00:53:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:28.993 00:53:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:28.993 00:53:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:28.993 00:53:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:28.993 00:53:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:28.993 00:53:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:28.993 00:53:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:28.993 00:53:21 -- nvmf/common.sh@295 -- # net_devs=() 00:19:28.993 00:53:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:28.993 00:53:21 -- nvmf/common.sh@296 -- # e810=() 00:19:28.993 00:53:21 -- nvmf/common.sh@296 -- # local -ga e810 00:19:28.993 00:53:21 -- nvmf/common.sh@297 -- # x722=() 00:19:28.993 
00:53:21 -- nvmf/common.sh@297 -- # local -ga x722 00:19:28.993 00:53:21 -- nvmf/common.sh@298 -- # mlx=() 00:19:28.993 00:53:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:28.993 00:53:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.993 00:53:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.993 00:53:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.993 00:53:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.993 00:53:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.993 00:53:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.993 00:53:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.993 00:53:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.993 00:53:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.993 00:53:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.993 00:53:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.993 00:53:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:28.993 00:53:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:28.993 00:53:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:28.993 00:53:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.993 00:53:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:28.993 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:28.993 00:53:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.993 00:53:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:28.993 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:28.993 00:53:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:28.993 00:53:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.993 00:53:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.993 00:53:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:28.993 00:53:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.993 00:53:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:28.993 Found net devices under 0000:86:00.0: cvl_0_0 00:19:28.993 00:53:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.993 00:53:21 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.993 00:53:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.993 00:53:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:28.993 00:53:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.993 00:53:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:28.993 Found net devices under 0000:86:00.1: cvl_0_1 00:19:28.993 00:53:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.993 00:53:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:28.993 00:53:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:28.993 00:53:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:28.993 00:53:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:28.993 00:53:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.993 00:53:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.993 00:53:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.993 00:53:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:28.993 00:53:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:28.993 00:53:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:28.993 00:53:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:28.993 00:53:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:28.993 00:53:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.993 00:53:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:28.993 00:53:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:28.993 00:53:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:28.994 00:53:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:28.994 00:53:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:28.994 00:53:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:28.994 00:53:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:28.994 00:53:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:28.994 00:53:21 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:28.994 00:53:21 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:28.994 00:53:21 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:28.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:19:28.994 00:19:28.994 --- 10.0.0.2 ping statistics --- 00:19:28.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.994 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:19:28.994 00:53:21 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:28.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:28.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.424 ms 00:19:28.994 00:19:28.994 --- 10.0.0.1 ping statistics --- 00:19:28.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.994 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:19:28.994 00:53:21 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.994 00:53:21 -- nvmf/common.sh@411 -- # return 0 00:19:28.994 00:53:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:28.994 00:53:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.994 00:53:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:28.994 00:53:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:28.994 00:53:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.994 00:53:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:28.994 00:53:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:28.994 00:53:21 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:28.994 00:53:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:28.994 00:53:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:28.994 00:53:21 -- common/autotest_common.sh@10 -- # set +x 00:19:28.994 00:53:21 -- nvmf/common.sh@470 -- # nvmfpid=1738341 00:19:28.994 00:53:21 -- nvmf/common.sh@471 -- # waitforlisten 1738341 00:19:28.994 00:53:21 -- common/autotest_common.sh@817 -- # '[' -z 1738341 ']' 00:19:28.994 00:53:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.994 00:53:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:28.994 00:53:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.994 00:53:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:28.994 00:53:21 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:28.994 00:53:21 -- common/autotest_common.sh@10 -- # set +x 00:19:28.994 [2024-04-27 00:53:21.584019] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:28.994 [2024-04-27 00:53:21.584061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.994 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.994 [2024-04-27 00:53:21.641286] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.254 [2024-04-27 00:53:21.727212] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.254 [2024-04-27 00:53:21.727247] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.254 [2024-04-27 00:53:21.727254] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.254 [2024-04-27 00:53:21.727260] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.254 [2024-04-27 00:53:21.727267] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
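The bring-up that just scrolled by is the standard nvmf_tcp_init pattern for a two-port physical NIC: one port (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is allowed through iptables, and both directions are ping-checked before nvmf_tgt is started inside that namespace. Condensed from the commands in the log; waitforlisten is the autotest_common.sh helper that polls until the app answers on /var/tmp/spdk.sock:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
modprobe nvme-tcp
# target app runs inside the namespace; aer.sh uses four cores (-m 0xF)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
waitforlisten $!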
00:19:29.254 [2024-04-27 00:53:21.727313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.254 [2024-04-27 00:53:21.727323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.254 [2024-04-27 00:53:21.727345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.254 [2024-04-27 00:53:21.727347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.823 00:53:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:29.823 00:53:22 -- common/autotest_common.sh@850 -- # return 0 00:19:29.823 00:53:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:29.823 00:53:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:29.823 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:19:29.823 00:53:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.823 00:53:22 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:29.823 00:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.823 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:19:29.823 [2024-04-27 00:53:22.425786] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.823 00:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.823 00:53:22 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:29.823 00:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.823 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:19:29.823 Malloc0 00:19:29.823 00:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.823 00:53:22 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:29.823 00:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.823 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:19:29.823 00:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.823 00:53:22 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:29.823 00:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.823 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:19:29.823 00:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.823 00:53:22 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.823 00:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.823 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:19:29.823 [2024-04-27 00:53:22.477569] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.823 00:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.823 00:53:22 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:29.823 00:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.823 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:19:29.823 [2024-04-27 00:53:22.485377] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:29.823 [ 00:19:29.823 { 00:19:29.823 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:29.823 "subtype": "Discovery", 00:19:29.823 "listen_addresses": [], 00:19:29.823 "allow_any_host": true, 00:19:29.823 "hosts": [] 00:19:29.823 }, 00:19:29.823 { 00:19:29.823 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:19:29.823 "subtype": "NVMe", 00:19:29.823 "listen_addresses": [ 00:19:29.823 { 00:19:29.823 "transport": "TCP", 00:19:29.823 "trtype": "TCP", 00:19:29.823 "adrfam": "IPv4", 00:19:29.823 "traddr": "10.0.0.2", 00:19:29.823 "trsvcid": "4420" 00:19:29.823 } 00:19:29.823 ], 00:19:29.823 "allow_any_host": true, 00:19:29.823 "hosts": [], 00:19:29.823 "serial_number": "SPDK00000000000001", 00:19:29.823 "model_number": "SPDK bdev Controller", 00:19:29.823 "max_namespaces": 2, 00:19:29.823 "min_cntlid": 1, 00:19:29.823 "max_cntlid": 65519, 00:19:29.823 "namespaces": [ 00:19:29.823 { 00:19:29.823 "nsid": 1, 00:19:29.823 "bdev_name": "Malloc0", 00:19:29.823 "name": "Malloc0", 00:19:29.823 "nguid": "AD6166DEA1E348558F15311B994CD1F9", 00:19:29.823 "uuid": "ad6166de-a1e3-4855-8f15-311b994cd1f9" 00:19:29.823 } 00:19:29.823 ] 00:19:29.823 } 00:19:29.823 ] 00:19:29.823 00:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:29.823 00:53:22 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:29.823 00:53:22 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:29.823 00:53:22 -- host/aer.sh@33 -- # aerpid=1738392 00:19:29.823 00:53:22 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:29.823 00:53:22 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:29.823 00:53:22 -- common/autotest_common.sh@1251 -- # local i=0 00:19:29.823 00:53:22 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:29.823 00:53:22 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:19:29.823 00:53:22 -- common/autotest_common.sh@1254 -- # i=1 00:19:29.823 00:53:22 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:19:30.083 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.083 00:53:22 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:30.083 00:53:22 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:19:30.083 00:53:22 -- common/autotest_common.sh@1254 -- # i=2 00:19:30.083 00:53:22 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:19:30.083 00:53:22 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:30.083 00:53:22 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:30.083 00:53:22 -- common/autotest_common.sh@1262 -- # return 0 00:19:30.083 00:53:22 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:30.083 00:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.083 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:19:30.083 Malloc1 00:19:30.083 00:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.083 00:53:22 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:30.083 00:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.083 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:19:30.083 00:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.083 00:53:22 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:30.083 00:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.083 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:19:30.083 [ 00:19:30.083 { 00:19:30.083 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:30.083 "subtype": "Discovery", 00:19:30.083 "listen_addresses": [], 00:19:30.083 "allow_any_host": true, 00:19:30.083 "hosts": [] 00:19:30.083 }, 00:19:30.083 { 00:19:30.083 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.083 "subtype": "NVMe", 00:19:30.083 "listen_addresses": [ 00:19:30.083 { 00:19:30.083 "transport": "TCP", 00:19:30.083 "trtype": "TCP", 00:19:30.083 "adrfam": "IPv4", 00:19:30.083 "traddr": "10.0.0.2", 00:19:30.083 "trsvcid": "4420" 00:19:30.083 } 00:19:30.343 ], 00:19:30.343 "allow_any_host": true, 00:19:30.343 "hosts": [], 00:19:30.343 "serial_number": "SPDK00000000000001", 00:19:30.343 "model_number": "SPDK bdev Controller", 00:19:30.343 "max_namespaces": 2, 00:19:30.343 "min_cntlid": 1, 00:19:30.343 "max_cntlid": 65519, 00:19:30.343 "namespaces": [ 00:19:30.343 { 00:19:30.343 "nsid": 1, 00:19:30.343 "bdev_name": "Malloc0", 00:19:30.343 "name": "Malloc0", 00:19:30.343 "nguid": "AD6166DEA1E348558F15311B994CD1F9", 00:19:30.343 "uuid": "ad6166de-a1e3-4855-8f15-311b994cd1f9" 00:19:30.343 }, 00:19:30.343 { 00:19:30.343 "nsid": 2, 00:19:30.343 Asynchronous Event Request test 00:19:30.343 Attaching to 10.0.0.2 00:19:30.343 Attached to 10.0.0.2 00:19:30.343 Registering asynchronous event callbacks... 00:19:30.343 Starting namespace attribute notice tests for all controllers... 00:19:30.343 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:30.343 aer_cb - Changed Namespace 00:19:30.343 Cleaning up... 
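What the aer test is exercising: the host-side aer tool connects to cnode1, registers an asynchronous event callback, and blocks; the script then hot-adds a second namespace (Malloc1), which makes the target emit a Namespace Attribute Changed event (log page 4, aen_event_type 0x02), and the tool acknowledges it by touching /tmp/aer_touch_file, which waitforfile has been polling for. Condensed to rpc.py equivalents of the rpc_cmd calls above (rpc_cmd is the autotest wrapper around scripts/rpk.py is an assumption of naming; it forwards to scripts/rpc.py), plus the host-side tool invocation from the log:
# host side, run from the spdk tree in the root namespace
./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
# target side: hot-adding a namespace to the live subsystem triggers the namespace-change AEN
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2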
00:19:30.343 "bdev_name": "Malloc1", 00:19:30.343 "name": "Malloc1", 00:19:30.343 "nguid": "3287BD4CEBBE47C68A5ACFFE1AAD9BB6", 00:19:30.343 "uuid": "3287bd4c-ebbe-47c6-8a5a-cffe1aad9bb6" 00:19:30.343 } 00:19:30.343 ] 00:19:30.343 } 00:19:30.343 ] 00:19:30.343 00:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.343 00:53:22 -- host/aer.sh@43 -- # wait 1738392 00:19:30.343 00:53:22 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:30.343 00:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.343 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:19:30.343 00:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.343 00:53:22 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:30.343 00:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.343 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:19:30.343 00:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.343 00:53:22 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.343 00:53:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.343 00:53:22 -- common/autotest_common.sh@10 -- # set +x 00:19:30.343 00:53:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.343 00:53:22 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:30.343 00:53:22 -- host/aer.sh@51 -- # nvmftestfini 00:19:30.343 00:53:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:30.343 00:53:22 -- nvmf/common.sh@117 -- # sync 00:19:30.343 00:53:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:30.343 00:53:22 -- nvmf/common.sh@120 -- # set +e 00:19:30.343 00:53:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:30.343 00:53:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:30.343 rmmod nvme_tcp 00:19:30.343 rmmod nvme_fabrics 00:19:30.343 rmmod nvme_keyring 00:19:30.343 00:53:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:30.343 00:53:22 -- nvmf/common.sh@124 -- # set -e 00:19:30.343 00:53:22 -- nvmf/common.sh@125 -- # return 0 00:19:30.343 00:53:22 -- nvmf/common.sh@478 -- # '[' -n 1738341 ']' 00:19:30.343 00:53:22 -- nvmf/common.sh@479 -- # killprocess 1738341 00:19:30.343 00:53:22 -- common/autotest_common.sh@936 -- # '[' -z 1738341 ']' 00:19:30.343 00:53:22 -- common/autotest_common.sh@940 -- # kill -0 1738341 00:19:30.343 00:53:22 -- common/autotest_common.sh@941 -- # uname 00:19:30.343 00:53:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:30.343 00:53:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1738341 00:19:30.343 00:53:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:30.343 00:53:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:30.343 00:53:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1738341' 00:19:30.343 killing process with pid 1738341 00:19:30.343 00:53:22 -- common/autotest_common.sh@955 -- # kill 1738341 00:19:30.343 [2024-04-27 00:53:22.943301] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:30.343 00:53:22 -- common/autotest_common.sh@960 -- # wait 1738341 00:19:30.603 00:53:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:30.603 00:53:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:30.603 00:53:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:30.603 00:53:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.603 00:53:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:30.603 00:53:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.603 00:53:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.603 00:53:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.510 00:53:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:32.768 00:19:32.768 real 0m9.049s 00:19:32.768 user 0m7.064s 00:19:32.768 sys 0m4.408s 00:19:32.768 00:53:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:32.768 00:53:25 -- common/autotest_common.sh@10 -- # set +x 00:19:32.768 ************************************ 00:19:32.768 END TEST nvmf_aer 00:19:32.768 ************************************ 00:19:32.768 00:53:25 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:32.769 00:53:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:32.769 00:53:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:32.769 00:53:25 -- common/autotest_common.sh@10 -- # set +x 00:19:32.769 ************************************ 00:19:32.769 START TEST nvmf_async_init 00:19:32.769 ************************************ 00:19:32.769 00:53:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:32.769 * Looking for test storage... 00:19:33.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:33.028 00:53:25 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.028 00:53:25 -- nvmf/common.sh@7 -- # uname -s 00:19:33.028 00:53:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.028 00:53:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.028 00:53:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.028 00:53:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.028 00:53:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.028 00:53:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.028 00:53:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.028 00:53:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.028 00:53:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.028 00:53:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.028 00:53:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.028 00:53:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.028 00:53:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.028 00:53:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.028 00:53:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.028 00:53:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.028 00:53:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.028 00:53:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.028 00:53:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.028 00:53:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.028 00:53:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.028 00:53:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.028 00:53:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.028 00:53:25 -- paths/export.sh@5 -- # export PATH 00:19:33.028 00:53:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.028 00:53:25 -- nvmf/common.sh@47 -- # : 0 00:19:33.028 00:53:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:33.028 00:53:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:33.028 00:53:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.028 00:53:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.028 00:53:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.028 00:53:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:33.028 00:53:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:33.028 00:53:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:33.028 00:53:25 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:33.028 00:53:25 -- host/async_init.sh@14 -- # null_block_size=512 00:19:33.028 00:53:25 -- host/async_init.sh@15 -- # null_bdev=null0 00:19:33.028 00:53:25 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:33.028 00:53:25 -- host/async_init.sh@20 -- # uuidgen 00:19:33.028 00:53:25 -- host/async_init.sh@20 -- # tr -d - 00:19:33.028 00:53:25 -- host/async_init.sh@20 -- # nguid=8d017932cd2b4b2693a6f2def20f2cd4 00:19:33.028 00:53:25 -- host/async_init.sh@22 -- # nvmftestinit 00:19:33.028 00:53:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 
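Setup parameters for the async_init test above: a 1024-block, 512-byte null bdev (null0) will be exported as a namespace whose NGUID is fixed up front, so that after bdev_nvme_attach_controller the test can check through bdev_get_bdevs that the attached nvme0n1 reports the same identifier. The NGUID is simply a UUID with the dashes stripped, exactly as the script computes it:
null_bdev_size=1024            # blocks
null_block_size=512            # bytes per block
nguid=$(uuidgen | tr -d -)     # e.g. 8d017932cd2b4b2693a6f2def20f2cd4 on this run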
00:19:33.028 00:53:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.028 00:53:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:33.028 00:53:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:33.028 00:53:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:33.028 00:53:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.028 00:53:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.028 00:53:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.028 00:53:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:33.029 00:53:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:33.029 00:53:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:33.029 00:53:25 -- common/autotest_common.sh@10 -- # set +x 00:19:38.303 00:53:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:38.303 00:53:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:38.303 00:53:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:38.303 00:53:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:38.303 00:53:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:38.303 00:53:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:38.303 00:53:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:38.303 00:53:30 -- nvmf/common.sh@295 -- # net_devs=() 00:19:38.303 00:53:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:38.303 00:53:30 -- nvmf/common.sh@296 -- # e810=() 00:19:38.303 00:53:30 -- nvmf/common.sh@296 -- # local -ga e810 00:19:38.303 00:53:30 -- nvmf/common.sh@297 -- # x722=() 00:19:38.303 00:53:30 -- nvmf/common.sh@297 -- # local -ga x722 00:19:38.303 00:53:30 -- nvmf/common.sh@298 -- # mlx=() 00:19:38.303 00:53:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:38.303 00:53:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.303 00:53:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.303 00:53:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.303 00:53:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.303 00:53:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.303 00:53:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.303 00:53:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.303 00:53:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.303 00:53:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.303 00:53:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.303 00:53:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.303 00:53:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:38.303 00:53:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:38.303 00:53:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:38.303 00:53:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:38.303 00:53:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:38.303 00:53:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:38.303 00:53:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.303 00:53:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:38.303 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:38.303 00:53:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:38.303 00:53:30 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:38.303 00:53:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.303 00:53:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.303 00:53:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:38.303 00:53:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.303 00:53:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:38.303 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:38.303 00:53:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:38.303 00:53:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:38.303 00:53:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.304 00:53:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.304 00:53:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:38.304 00:53:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:38.304 00:53:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:38.304 00:53:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:38.304 00:53:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.304 00:53:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.304 00:53:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:38.304 00:53:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.304 00:53:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:38.304 Found net devices under 0000:86:00.0: cvl_0_0 00:19:38.304 00:53:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.304 00:53:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.304 00:53:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.304 00:53:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:38.304 00:53:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.304 00:53:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:38.304 Found net devices under 0000:86:00.1: cvl_0_1 00:19:38.304 00:53:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.304 00:53:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:38.304 00:53:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:38.304 00:53:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:38.304 00:53:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:38.304 00:53:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:38.304 00:53:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.304 00:53:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.304 00:53:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.304 00:53:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:38.304 00:53:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:38.304 00:53:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:38.304 00:53:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:38.304 00:53:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:38.304 00:53:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.304 00:53:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:38.304 00:53:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:38.304 00:53:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:38.304 00:53:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:19:38.304 00:53:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:38.304 00:53:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:38.304 00:53:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:38.304 00:53:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:38.304 00:53:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:38.304 00:53:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:38.304 00:53:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:38.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:19:38.304 00:19:38.304 --- 10.0.0.2 ping statistics --- 00:19:38.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.304 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:19:38.304 00:53:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:38.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:38.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:19:38.304 00:19:38.304 --- 10.0.0.1 ping statistics --- 00:19:38.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.304 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:19:38.304 00:53:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.304 00:53:30 -- nvmf/common.sh@411 -- # return 0 00:19:38.304 00:53:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:38.304 00:53:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.304 00:53:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:38.304 00:53:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:38.304 00:53:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.304 00:53:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:38.304 00:53:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:38.304 00:53:30 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:38.304 00:53:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:38.304 00:53:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:38.304 00:53:30 -- common/autotest_common.sh@10 -- # set +x 00:19:38.304 00:53:30 -- nvmf/common.sh@470 -- # nvmfpid=1741912 00:19:38.304 00:53:30 -- nvmf/common.sh@471 -- # waitforlisten 1741912 00:19:38.304 00:53:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:38.304 00:53:30 -- common/autotest_common.sh@817 -- # '[' -z 1741912 ']' 00:19:38.304 00:53:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.304 00:53:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:38.304 00:53:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.304 00:53:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:38.304 00:53:30 -- common/autotest_common.sh@10 -- # set +x 00:19:38.304 [2024-04-27 00:53:30.863450] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:19:38.304 [2024-04-27 00:53:30.863491] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.304 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.304 [2024-04-27 00:53:30.917999] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.304 [2024-04-27 00:53:30.994739] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.304 [2024-04-27 00:53:30.994775] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.304 [2024-04-27 00:53:30.994782] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.304 [2024-04-27 00:53:30.994788] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.304 [2024-04-27 00:53:30.994793] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:38.304 [2024-04-27 00:53:30.994809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.240 00:53:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:39.240 00:53:31 -- common/autotest_common.sh@850 -- # return 0 00:19:39.240 00:53:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:39.240 00:53:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:39.240 00:53:31 -- common/autotest_common.sh@10 -- # set +x 00:19:39.240 00:53:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.240 00:53:31 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:19:39.240 00:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.240 00:53:31 -- common/autotest_common.sh@10 -- # set +x 00:19:39.240 [2024-04-27 00:53:31.705615] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.240 00:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.240 00:53:31 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:39.240 00:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.240 00:53:31 -- common/autotest_common.sh@10 -- # set +x 00:19:39.240 null0 00:19:39.240 00:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.240 00:53:31 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:39.240 00:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.240 00:53:31 -- common/autotest_common.sh@10 -- # set +x 00:19:39.240 00:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.240 00:53:31 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:39.240 00:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.240 00:53:31 -- common/autotest_common.sh@10 -- # set +x 00:19:39.240 00:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.240 00:53:31 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8d017932cd2b4b2693a6f2def20f2cd4 00:19:39.240 00:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.240 00:53:31 -- common/autotest_common.sh@10 -- # set +x 00:19:39.240 00:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.240 00:53:31 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
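The target-side setup for async_init mirrors the aer run, but on a single core and with the subsystem built on the null bdev carrying the pre-computed NGUID; the host side then attaches to it over TCP as a bdev-layer controller named nvme0, which is what produces the nvme0n1 bdev dumped just below. The same sequence expressed as rpc.py calls (rpc_cmd in the log forwards to scripts/rpc.py):
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py bdev_null_create null0 1024 512
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# host side: attach as a bdev-layer controller; this creates the nvme0n1 bdev
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
Pinning the NGUID lets the test assert identity across reconnects: the same uuid value shows up in every bdev_get_bdevs dump below while cntlid advances from 1 to 2 to 3.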
00:19:39.240 00:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.240 00:53:31 -- common/autotest_common.sh@10 -- # set +x 00:19:39.240 [2024-04-27 00:53:31.745809] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.240 00:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.240 00:53:31 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:39.240 00:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.240 00:53:31 -- common/autotest_common.sh@10 -- # set +x 00:19:39.499 nvme0n1 00:19:39.499 00:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.499 00:53:31 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:39.499 00:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.499 00:53:31 -- common/autotest_common.sh@10 -- # set +x 00:19:39.499 [ 00:19:39.499 { 00:19:39.499 "name": "nvme0n1", 00:19:39.499 "aliases": [ 00:19:39.499 "8d017932-cd2b-4b26-93a6-f2def20f2cd4" 00:19:39.499 ], 00:19:39.499 "product_name": "NVMe disk", 00:19:39.499 "block_size": 512, 00:19:39.499 "num_blocks": 2097152, 00:19:39.499 "uuid": "8d017932-cd2b-4b26-93a6-f2def20f2cd4", 00:19:39.499 "assigned_rate_limits": { 00:19:39.499 "rw_ios_per_sec": 0, 00:19:39.499 "rw_mbytes_per_sec": 0, 00:19:39.499 "r_mbytes_per_sec": 0, 00:19:39.499 "w_mbytes_per_sec": 0 00:19:39.499 }, 00:19:39.499 "claimed": false, 00:19:39.499 "zoned": false, 00:19:39.499 "supported_io_types": { 00:19:39.499 "read": true, 00:19:39.499 "write": true, 00:19:39.499 "unmap": false, 00:19:39.499 "write_zeroes": true, 00:19:39.499 "flush": true, 00:19:39.499 "reset": true, 00:19:39.499 "compare": true, 00:19:39.499 "compare_and_write": true, 00:19:39.499 "abort": true, 00:19:39.499 "nvme_admin": true, 00:19:39.499 "nvme_io": true 00:19:39.499 }, 00:19:39.499 "memory_domains": [ 00:19:39.499 { 00:19:39.499 "dma_device_id": "system", 00:19:39.499 "dma_device_type": 1 00:19:39.499 } 00:19:39.499 ], 00:19:39.499 "driver_specific": { 00:19:39.499 "nvme": [ 00:19:39.499 { 00:19:39.499 "trid": { 00:19:39.499 "trtype": "TCP", 00:19:39.499 "adrfam": "IPv4", 00:19:39.499 "traddr": "10.0.0.2", 00:19:39.499 "trsvcid": "4420", 00:19:39.499 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:39.499 }, 00:19:39.499 "ctrlr_data": { 00:19:39.499 "cntlid": 1, 00:19:39.499 "vendor_id": "0x8086", 00:19:39.499 "model_number": "SPDK bdev Controller", 00:19:39.499 "serial_number": "00000000000000000000", 00:19:39.499 "firmware_revision": "24.05", 00:19:39.499 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:39.499 "oacs": { 00:19:39.499 "security": 0, 00:19:39.499 "format": 0, 00:19:39.499 "firmware": 0, 00:19:39.499 "ns_manage": 0 00:19:39.499 }, 00:19:39.499 "multi_ctrlr": true, 00:19:39.499 "ana_reporting": false 00:19:39.499 }, 00:19:39.499 "vs": { 00:19:39.499 "nvme_version": "1.3" 00:19:39.499 }, 00:19:39.499 "ns_data": { 00:19:39.499 "id": 1, 00:19:39.499 "can_share": true 00:19:39.499 } 00:19:39.499 } 00:19:39.499 ], 00:19:39.499 "mp_policy": "active_passive" 00:19:39.499 } 00:19:39.499 } 00:19:39.499 ] 00:19:39.499 00:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.499 00:53:31 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:39.499 00:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.499 00:53:31 -- common/autotest_common.sh@10 -- # set +x 00:19:39.499 [2024-04-27 00:53:31.998364] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:39.499 [2024-04-27 00:53:31.998415] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadf0f0 (9): Bad file descriptor 00:19:39.499 [2024-04-27 00:53:32.130144] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:39.499 00:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.499 00:53:32 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:39.499 00:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.499 00:53:32 -- common/autotest_common.sh@10 -- # set +x 00:19:39.500 [ 00:19:39.500 { 00:19:39.500 "name": "nvme0n1", 00:19:39.500 "aliases": [ 00:19:39.500 "8d017932-cd2b-4b26-93a6-f2def20f2cd4" 00:19:39.500 ], 00:19:39.500 "product_name": "NVMe disk", 00:19:39.500 "block_size": 512, 00:19:39.500 "num_blocks": 2097152, 00:19:39.500 "uuid": "8d017932-cd2b-4b26-93a6-f2def20f2cd4", 00:19:39.500 "assigned_rate_limits": { 00:19:39.500 "rw_ios_per_sec": 0, 00:19:39.500 "rw_mbytes_per_sec": 0, 00:19:39.500 "r_mbytes_per_sec": 0, 00:19:39.500 "w_mbytes_per_sec": 0 00:19:39.500 }, 00:19:39.500 "claimed": false, 00:19:39.500 "zoned": false, 00:19:39.500 "supported_io_types": { 00:19:39.500 "read": true, 00:19:39.500 "write": true, 00:19:39.500 "unmap": false, 00:19:39.500 "write_zeroes": true, 00:19:39.500 "flush": true, 00:19:39.500 "reset": true, 00:19:39.500 "compare": true, 00:19:39.500 "compare_and_write": true, 00:19:39.500 "abort": true, 00:19:39.500 "nvme_admin": true, 00:19:39.500 "nvme_io": true 00:19:39.500 }, 00:19:39.500 "memory_domains": [ 00:19:39.500 { 00:19:39.500 "dma_device_id": "system", 00:19:39.500 "dma_device_type": 1 00:19:39.500 } 00:19:39.500 ], 00:19:39.500 "driver_specific": { 00:19:39.500 "nvme": [ 00:19:39.500 { 00:19:39.500 "trid": { 00:19:39.500 "trtype": "TCP", 00:19:39.500 "adrfam": "IPv4", 00:19:39.500 "traddr": "10.0.0.2", 00:19:39.500 "trsvcid": "4420", 00:19:39.500 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:39.500 }, 00:19:39.500 "ctrlr_data": { 00:19:39.500 "cntlid": 2, 00:19:39.500 "vendor_id": "0x8086", 00:19:39.500 "model_number": "SPDK bdev Controller", 00:19:39.500 "serial_number": "00000000000000000000", 00:19:39.500 "firmware_revision": "24.05", 00:19:39.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:39.500 "oacs": { 00:19:39.500 "security": 0, 00:19:39.500 "format": 0, 00:19:39.500 "firmware": 0, 00:19:39.500 "ns_manage": 0 00:19:39.500 }, 00:19:39.500 "multi_ctrlr": true, 00:19:39.500 "ana_reporting": false 00:19:39.500 }, 00:19:39.500 "vs": { 00:19:39.500 "nvme_version": "1.3" 00:19:39.500 }, 00:19:39.500 "ns_data": { 00:19:39.500 "id": 1, 00:19:39.500 "can_share": true 00:19:39.500 } 00:19:39.500 } 00:19:39.500 ], 00:19:39.500 "mp_policy": "active_passive" 00:19:39.500 } 00:19:39.500 } 00:19:39.500 ] 00:19:39.500 00:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.500 00:53:32 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.500 00:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.500 00:53:32 -- common/autotest_common.sh@10 -- # set +x 00:19:39.500 00:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.500 00:53:32 -- host/async_init.sh@53 -- # mktemp 00:19:39.500 00:53:32 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.OQav0WBRje 00:19:39.500 00:53:32 -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:39.500 00:53:32 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.OQav0WBRje 00:19:39.500 00:53:32 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:39.500 00:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.500 00:53:32 -- common/autotest_common.sh@10 -- # set +x 00:19:39.500 00:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.500 00:53:32 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:19:39.500 00:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.500 00:53:32 -- common/autotest_common.sh@10 -- # set +x 00:19:39.500 [2024-04-27 00:53:32.186928] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:39.500 [2024-04-27 00:53:32.187026] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:39.500 00:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.500 00:53:32 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OQav0WBRje 00:19:39.500 00:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.500 00:53:32 -- common/autotest_common.sh@10 -- # set +x 00:19:39.760 [2024-04-27 00:53:32.194945] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:39.760 00:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.760 00:53:32 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OQav0WBRje 00:19:39.760 00:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.760 00:53:32 -- common/autotest_common.sh@10 -- # set +x 00:19:39.760 [2024-04-27 00:53:32.202969] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:39.760 [2024-04-27 00:53:32.203003] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:39.760 nvme0n1 00:19:39.760 00:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.760 00:53:32 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:39.760 00:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.760 00:53:32 -- common/autotest_common.sh@10 -- # set +x 00:19:39.760 [ 00:19:39.760 { 00:19:39.760 "name": "nvme0n1", 00:19:39.760 "aliases": [ 00:19:39.760 "8d017932-cd2b-4b26-93a6-f2def20f2cd4" 00:19:39.760 ], 00:19:39.760 "product_name": "NVMe disk", 00:19:39.760 "block_size": 512, 00:19:39.760 "num_blocks": 2097152, 00:19:39.760 "uuid": "8d017932-cd2b-4b26-93a6-f2def20f2cd4", 00:19:39.760 "assigned_rate_limits": { 00:19:39.760 "rw_ios_per_sec": 0, 00:19:39.760 "rw_mbytes_per_sec": 0, 00:19:39.760 "r_mbytes_per_sec": 0, 00:19:39.760 "w_mbytes_per_sec": 0 00:19:39.760 }, 00:19:39.760 "claimed": false, 00:19:39.760 "zoned": false, 00:19:39.760 "supported_io_types": { 00:19:39.760 "read": true, 00:19:39.760 "write": true, 00:19:39.760 "unmap": false, 00:19:39.760 "write_zeroes": true, 00:19:39.760 "flush": true, 00:19:39.760 "reset": true, 00:19:39.760 "compare": true, 00:19:39.760 "compare_and_write": true, 00:19:39.760 
"abort": true, 00:19:39.760 "nvme_admin": true, 00:19:39.760 "nvme_io": true 00:19:39.760 }, 00:19:39.760 "memory_domains": [ 00:19:39.760 { 00:19:39.760 "dma_device_id": "system", 00:19:39.760 "dma_device_type": 1 00:19:39.760 } 00:19:39.760 ], 00:19:39.760 "driver_specific": { 00:19:39.760 "nvme": [ 00:19:39.760 { 00:19:39.760 "trid": { 00:19:39.760 "trtype": "TCP", 00:19:39.760 "adrfam": "IPv4", 00:19:39.760 "traddr": "10.0.0.2", 00:19:39.760 "trsvcid": "4421", 00:19:39.760 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:39.760 }, 00:19:39.760 "ctrlr_data": { 00:19:39.760 "cntlid": 3, 00:19:39.760 "vendor_id": "0x8086", 00:19:39.760 "model_number": "SPDK bdev Controller", 00:19:39.760 "serial_number": "00000000000000000000", 00:19:39.760 "firmware_revision": "24.05", 00:19:39.760 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:39.760 "oacs": { 00:19:39.760 "security": 0, 00:19:39.760 "format": 0, 00:19:39.760 "firmware": 0, 00:19:39.760 "ns_manage": 0 00:19:39.760 }, 00:19:39.760 "multi_ctrlr": true, 00:19:39.760 "ana_reporting": false 00:19:39.760 }, 00:19:39.760 "vs": { 00:19:39.760 "nvme_version": "1.3" 00:19:39.760 }, 00:19:39.760 "ns_data": { 00:19:39.760 "id": 1, 00:19:39.760 "can_share": true 00:19:39.760 } 00:19:39.760 } 00:19:39.760 ], 00:19:39.760 "mp_policy": "active_passive" 00:19:39.760 } 00:19:39.760 } 00:19:39.760 ] 00:19:39.760 00:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.760 00:53:32 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.760 00:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.760 00:53:32 -- common/autotest_common.sh@10 -- # set +x 00:19:39.760 00:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.760 00:53:32 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.OQav0WBRje 00:19:39.760 00:53:32 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:39.760 00:53:32 -- host/async_init.sh@78 -- # nvmftestfini 00:19:39.760 00:53:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:39.760 00:53:32 -- nvmf/common.sh@117 -- # sync 00:19:39.760 00:53:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:39.760 00:53:32 -- nvmf/common.sh@120 -- # set +e 00:19:39.760 00:53:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:39.760 00:53:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:39.760 rmmod nvme_tcp 00:19:39.760 rmmod nvme_fabrics 00:19:39.760 rmmod nvme_keyring 00:19:39.760 00:53:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:39.760 00:53:32 -- nvmf/common.sh@124 -- # set -e 00:19:39.760 00:53:32 -- nvmf/common.sh@125 -- # return 0 00:19:39.760 00:53:32 -- nvmf/common.sh@478 -- # '[' -n 1741912 ']' 00:19:39.760 00:53:32 -- nvmf/common.sh@479 -- # killprocess 1741912 00:19:39.760 00:53:32 -- common/autotest_common.sh@936 -- # '[' -z 1741912 ']' 00:19:39.760 00:53:32 -- common/autotest_common.sh@940 -- # kill -0 1741912 00:19:39.760 00:53:32 -- common/autotest_common.sh@941 -- # uname 00:19:39.760 00:53:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:39.760 00:53:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1741912 00:19:39.760 00:53:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:39.760 00:53:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:39.760 00:53:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1741912' 00:19:39.760 killing process with pid 1741912 00:19:39.760 00:53:32 -- common/autotest_common.sh@955 -- # kill 1741912 00:19:39.760 
[2024-04-27 00:53:32.397640] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:39.760 [2024-04-27 00:53:32.397664] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:39.760 00:53:32 -- common/autotest_common.sh@960 -- # wait 1741912 00:19:40.020 00:53:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:40.020 00:53:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:40.020 00:53:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:40.020 00:53:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:40.020 00:53:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:40.020 00:53:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.020 00:53:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.020 00:53:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.559 00:53:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:42.559 00:19:42.559 real 0m9.278s 00:19:42.559 user 0m3.508s 00:19:42.559 sys 0m4.303s 00:19:42.559 00:53:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:42.559 00:53:34 -- common/autotest_common.sh@10 -- # set +x 00:19:42.559 ************************************ 00:19:42.559 END TEST nvmf_async_init 00:19:42.559 ************************************ 00:19:42.559 00:53:34 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:42.559 00:53:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:42.559 00:53:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:42.559 00:53:34 -- common/autotest_common.sh@10 -- # set +x 00:19:42.559 ************************************ 00:19:42.559 START TEST dma 00:19:42.559 ************************************ 00:19:42.560 00:53:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:19:42.560 * Looking for test storage... 
00:19:42.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:42.560 00:53:34 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.560 00:53:34 -- nvmf/common.sh@7 -- # uname -s 00:19:42.560 00:53:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.560 00:53:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.560 00:53:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.560 00:53:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.560 00:53:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.560 00:53:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.560 00:53:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.560 00:53:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.560 00:53:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.560 00:53:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.560 00:53:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.560 00:53:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.560 00:53:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.560 00:53:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.560 00:53:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.560 00:53:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.560 00:53:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:42.560 00:53:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.560 00:53:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.560 00:53:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.560 00:53:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.560 00:53:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.560 00:53:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.560 00:53:34 -- paths/export.sh@5 -- # export PATH 00:19:42.560 00:53:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.560 00:53:34 -- nvmf/common.sh@47 -- # : 0 00:19:42.560 00:53:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:42.560 00:53:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:42.560 00:53:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.560 00:53:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.560 00:53:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.560 00:53:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:42.560 00:53:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:42.560 00:53:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:42.560 00:53:34 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:19:42.560 00:53:34 -- host/dma.sh@13 -- # exit 0 00:19:42.560 00:19:42.560 real 0m0.113s 00:19:42.560 user 0m0.055s 00:19:42.560 sys 0m0.066s 00:19:42.560 00:53:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:42.560 00:53:34 -- common/autotest_common.sh@10 -- # set +x 00:19:42.560 ************************************ 00:19:42.560 END TEST dma 00:19:42.560 ************************************ 00:19:42.560 00:53:34 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:42.560 00:53:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:42.560 00:53:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:42.560 00:53:34 -- common/autotest_common.sh@10 -- # set +x 00:19:42.560 ************************************ 00:19:42.560 START TEST nvmf_identify 00:19:42.560 ************************************ 00:19:42.560 00:53:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:19:42.560 * Looking for test storage... 
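The dma suite above passes in well under a second because it is an RDMA-only test: after sourcing nvmf/common.sh it checks the transport and exits immediately on TCP. A paraphrased sketch of the guard echoed in the trace (the variable name is an assumption; the trace only shows the already-expanded comparison):

  # host/dma.sh guard, paraphrased: skip the whole suite unless the transport is RDMA
  if [ "$TEST_TRANSPORT" != "rdma" ]; then
          exit 0
  fi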
00:19:42.560 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:42.560 00:53:35 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.560 00:53:35 -- nvmf/common.sh@7 -- # uname -s 00:19:42.560 00:53:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.560 00:53:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.560 00:53:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.560 00:53:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.560 00:53:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.560 00:53:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.560 00:53:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.560 00:53:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.560 00:53:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.560 00:53:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.560 00:53:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.560 00:53:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:42.560 00:53:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.560 00:53:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.560 00:53:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.560 00:53:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.560 00:53:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:42.560 00:53:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.560 00:53:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.560 00:53:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.560 00:53:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.560 00:53:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.560 00:53:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.560 00:53:35 -- paths/export.sh@5 -- # export PATH 00:19:42.560 00:53:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.560 00:53:35 -- nvmf/common.sh@47 -- # : 0 00:19:42.560 00:53:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:42.560 00:53:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:42.560 00:53:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.560 00:53:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.560 00:53:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.560 00:53:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:42.560 00:53:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:42.560 00:53:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:42.560 00:53:35 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:42.560 00:53:35 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:42.560 00:53:35 -- host/identify.sh@14 -- # nvmftestinit 00:19:42.560 00:53:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:42.560 00:53:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.560 00:53:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:42.560 00:53:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:42.560 00:53:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:42.560 00:53:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.560 00:53:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.560 00:53:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.560 00:53:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:42.560 00:53:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:42.560 00:53:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:42.560 00:53:35 -- common/autotest_common.sh@10 -- # set +x 00:19:47.842 00:53:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:47.842 00:53:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:47.842 00:53:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:47.842 00:53:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:47.842 00:53:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:47.842 00:53:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:47.842 00:53:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:47.842 00:53:40 -- nvmf/common.sh@295 -- # net_devs=() 00:19:47.842 00:53:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:47.842 00:53:40 -- nvmf/common.sh@296 
-- # e810=() 00:19:47.842 00:53:40 -- nvmf/common.sh@296 -- # local -ga e810 00:19:47.842 00:53:40 -- nvmf/common.sh@297 -- # x722=() 00:19:47.842 00:53:40 -- nvmf/common.sh@297 -- # local -ga x722 00:19:47.842 00:53:40 -- nvmf/common.sh@298 -- # mlx=() 00:19:47.842 00:53:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:47.842 00:53:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.842 00:53:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.842 00:53:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.842 00:53:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.842 00:53:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.842 00:53:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.842 00:53:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.842 00:53:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.842 00:53:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.842 00:53:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.842 00:53:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.842 00:53:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:47.842 00:53:40 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:47.842 00:53:40 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:47.842 00:53:40 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:47.842 00:53:40 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:47.842 00:53:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:47.842 00:53:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.842 00:53:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:47.842 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:47.842 00:53:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.842 00:53:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.843 00:53:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.843 00:53:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.843 00:53:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.843 00:53:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.843 00:53:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:47.843 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:47.843 00:53:40 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:47.843 00:53:40 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:47.843 00:53:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.843 00:53:40 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.843 00:53:40 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:47.843 00:53:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:47.843 00:53:40 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:47.843 00:53:40 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:47.843 00:53:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.843 00:53:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.843 00:53:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:47.843 00:53:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.843 00:53:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:47.843 Found 
net devices under 0000:86:00.0: cvl_0_0 00:19:47.843 00:53:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.843 00:53:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.843 00:53:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.843 00:53:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:47.843 00:53:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.843 00:53:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:47.843 Found net devices under 0000:86:00.1: cvl_0_1 00:19:47.843 00:53:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.843 00:53:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:47.843 00:53:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:47.843 00:53:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:47.843 00:53:40 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:47.843 00:53:40 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:47.843 00:53:40 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.843 00:53:40 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.843 00:53:40 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.843 00:53:40 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:47.843 00:53:40 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.843 00:53:40 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.843 00:53:40 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:47.843 00:53:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.843 00:53:40 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.843 00:53:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:47.843 00:53:40 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:47.843 00:53:40 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.843 00:53:40 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:48.102 00:53:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.102 00:53:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.102 00:53:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:48.102 00:53:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.102 00:53:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.102 00:53:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.102 00:53:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:48.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:48.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:19:48.102 00:19:48.102 --- 10.0.0.2 ping statistics --- 00:19:48.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.102 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:19:48.102 00:53:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:48.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:19:48.102 00:19:48.102 --- 10.0.0.1 ping statistics --- 00:19:48.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.102 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:19:48.102 00:53:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.102 00:53:40 -- nvmf/common.sh@411 -- # return 0 00:19:48.102 00:53:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:48.102 00:53:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.102 00:53:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:48.102 00:53:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:48.102 00:53:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.102 00:53:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:48.102 00:53:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:48.102 00:53:40 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:19:48.102 00:53:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:48.102 00:53:40 -- common/autotest_common.sh@10 -- # set +x 00:19:48.102 00:53:40 -- host/identify.sh@19 -- # nvmfpid=1745763 00:19:48.102 00:53:40 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:48.102 00:53:40 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:48.102 00:53:40 -- host/identify.sh@23 -- # waitforlisten 1745763 00:19:48.102 00:53:40 -- common/autotest_common.sh@817 -- # '[' -z 1745763 ']' 00:19:48.102 00:53:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.102 00:53:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:48.102 00:53:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.102 00:53:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:48.102 00:53:40 -- common/autotest_common.sh@10 -- # set +x 00:19:48.102 [2024-04-27 00:53:40.782667] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:48.102 [2024-04-27 00:53:40.782726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.362 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.362 [2024-04-27 00:53:40.844591] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.362 [2024-04-27 00:53:40.918191] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.362 [2024-04-27 00:53:40.918234] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.362 [2024-04-27 00:53:40.918241] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.362 [2024-04-27 00:53:40.918248] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.362 [2024-04-27 00:53:40.918253] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
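The nvmftestinit trace above shows how the phy (NET_TYPE=phy) setup wires the two E810 ports, presumably cabled back to back: the target port cvl_0_0 is moved into a dedicated network namespace and addressed as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and nvmf_tgt is then launched inside the namespace so traffic really crosses the link. Condensed from the commands in the trace:

  # Put the target port in its own netns and address both ends of the link
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in, then verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1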
00:19:48.362 [2024-04-27 00:53:40.918304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.362 [2024-04-27 00:53:40.918400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.362 [2024-04-27 00:53:40.918486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.362 [2024-04-27 00:53:40.918487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.931 00:53:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:48.931 00:53:41 -- common/autotest_common.sh@850 -- # return 0 00:19:48.931 00:53:41 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:48.931 00:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:48.931 00:53:41 -- common/autotest_common.sh@10 -- # set +x 00:19:48.931 [2024-04-27 00:53:41.591914] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.931 00:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.931 00:53:41 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:19:48.931 00:53:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:48.931 00:53:41 -- common/autotest_common.sh@10 -- # set +x 00:19:49.193 00:53:41 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:49.193 00:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.193 00:53:41 -- common/autotest_common.sh@10 -- # set +x 00:19:49.193 Malloc0 00:19:49.193 00:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.193 00:53:41 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:49.193 00:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.193 00:53:41 -- common/autotest_common.sh@10 -- # set +x 00:19:49.193 00:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.193 00:53:41 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:19:49.193 00:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.193 00:53:41 -- common/autotest_common.sh@10 -- # set +x 00:19:49.193 00:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.193 00:53:41 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:49.193 00:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.193 00:53:41 -- common/autotest_common.sh@10 -- # set +x 00:19:49.193 [2024-04-27 00:53:41.680031] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.193 00:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.193 00:53:41 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:49.193 00:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.193 00:53:41 -- common/autotest_common.sh@10 -- # set +x 00:19:49.193 00:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.193 00:53:41 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:19:49.193 00:53:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.193 00:53:41 -- common/autotest_common.sh@10 -- # set +x 00:19:49.193 [2024-04-27 00:53:41.695853] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:49.193 [ 
00:19:49.193 { 00:19:49.193 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:49.193 "subtype": "Discovery", 00:19:49.193 "listen_addresses": [ 00:19:49.193 { 00:19:49.193 "transport": "TCP", 00:19:49.193 "trtype": "TCP", 00:19:49.193 "adrfam": "IPv4", 00:19:49.193 "traddr": "10.0.0.2", 00:19:49.193 "trsvcid": "4420" 00:19:49.193 } 00:19:49.193 ], 00:19:49.193 "allow_any_host": true, 00:19:49.193 "hosts": [] 00:19:49.193 }, 00:19:49.193 { 00:19:49.193 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.193 "subtype": "NVMe", 00:19:49.193 "listen_addresses": [ 00:19:49.193 { 00:19:49.193 "transport": "TCP", 00:19:49.193 "trtype": "TCP", 00:19:49.193 "adrfam": "IPv4", 00:19:49.193 "traddr": "10.0.0.2", 00:19:49.193 "trsvcid": "4420" 00:19:49.193 } 00:19:49.193 ], 00:19:49.193 "allow_any_host": true, 00:19:49.193 "hosts": [], 00:19:49.193 "serial_number": "SPDK00000000000001", 00:19:49.193 "model_number": "SPDK bdev Controller", 00:19:49.193 "max_namespaces": 32, 00:19:49.193 "min_cntlid": 1, 00:19:49.193 "max_cntlid": 65519, 00:19:49.193 "namespaces": [ 00:19:49.193 { 00:19:49.193 "nsid": 1, 00:19:49.193 "bdev_name": "Malloc0", 00:19:49.193 "name": "Malloc0", 00:19:49.193 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:19:49.193 "eui64": "ABCDEF0123456789", 00:19:49.193 "uuid": "0f24e31d-d9fe-4526-92a2-6994043bcd2e" 00:19:49.193 } 00:19:49.193 ] 00:19:49.193 } 00:19:49.193 ] 00:19:49.193 00:53:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.193 00:53:41 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:19:49.193 [2024-04-27 00:53:41.731028] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
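The subsystem listing above reflects the provisioning done a few lines earlier, and the identify utility is then pointed at the discovery service. A condensed sketch of that sequence, with scripts/rpc.py standing in for the test's rpc_cmd wrapper and the build paths shortened:

  # Target-side provisioning for the identify test, condensed from the trace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: identify the discovery controller with all debug log flags on
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The -L all option is what enables the per-PDU nvme_tcp.c and nvme_ctrlr.c DEBUG stream that follows, tracing the controller initialization state machine (connect adminq, read VS/CAP, CC.EN toggle, wait for CSTS.RDY, identify, keep-alive) before the identify data is printed.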
00:19:49.193 [2024-04-27 00:53:41.731064] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745977 ] 00:19:49.193 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.193 [2024-04-27 00:53:41.759603] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:19:49.194 [2024-04-27 00:53:41.759651] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:49.194 [2024-04-27 00:53:41.759656] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:49.194 [2024-04-27 00:53:41.759665] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:49.194 [2024-04-27 00:53:41.759673] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:49.194 [2024-04-27 00:53:41.760208] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:19:49.194 [2024-04-27 00:53:41.760244] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd1ad10 0 00:19:49.194 [2024-04-27 00:53:41.767084] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:49.194 [2024-04-27 00:53:41.767101] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:49.194 [2024-04-27 00:53:41.767107] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:49.194 [2024-04-27 00:53:41.767111] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:49.194 [2024-04-27 00:53:41.767149] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.767155] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.767159] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd1ad10) 00:19:49.194 [2024-04-27 00:53:41.767170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:49.194 [2024-04-27 00:53:41.767186] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a60, cid 0, qid 0 00:19:49.194 [2024-04-27 00:53:41.775084] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.194 [2024-04-27 00:53:41.775092] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.194 [2024-04-27 00:53:41.775095] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.775100] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82a60) on tqpair=0xd1ad10 00:19:49.194 [2024-04-27 00:53:41.775111] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:49.194 [2024-04-27 00:53:41.775117] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:19:49.194 [2024-04-27 00:53:41.775122] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:19:49.194 [2024-04-27 00:53:41.775135] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.775139] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.775142] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd1ad10) 00:19:49.194 [2024-04-27 00:53:41.775148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.194 [2024-04-27 00:53:41.775161] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a60, cid 0, qid 0 00:19:49.194 [2024-04-27 00:53:41.775370] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.194 [2024-04-27 00:53:41.775383] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.194 [2024-04-27 00:53:41.775386] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.775390] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82a60) on tqpair=0xd1ad10 00:19:49.194 [2024-04-27 00:53:41.775396] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:19:49.194 [2024-04-27 00:53:41.775406] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:19:49.194 [2024-04-27 00:53:41.775413] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.775417] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.775420] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd1ad10) 00:19:49.194 [2024-04-27 00:53:41.775428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.194 [2024-04-27 00:53:41.775442] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a60, cid 0, qid 0 00:19:49.194 [2024-04-27 00:53:41.775581] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.194 [2024-04-27 00:53:41.775590] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.194 [2024-04-27 00:53:41.775593] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.775597] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82a60) on tqpair=0xd1ad10 00:19:49.194 [2024-04-27 00:53:41.775603] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:19:49.194 [2024-04-27 00:53:41.775614] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:19:49.194 [2024-04-27 00:53:41.775622] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.775626] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.775629] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd1ad10) 00:19:49.194 [2024-04-27 00:53:41.775636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.194 [2024-04-27 00:53:41.775648] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a60, cid 0, qid 0 00:19:49.194 [2024-04-27 00:53:41.775954] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.194 [2024-04-27 00:53:41.775959] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.194 [2024-04-27 00:53:41.775962] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.775965] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82a60) on tqpair=0xd1ad10 00:19:49.194 [2024-04-27 00:53:41.775970] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:49.194 [2024-04-27 00:53:41.775979] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.775983] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.775986] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd1ad10) 00:19:49.194 [2024-04-27 00:53:41.775992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.194 [2024-04-27 00:53:41.776002] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a60, cid 0, qid 0 00:19:49.194 [2024-04-27 00:53:41.776149] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.194 [2024-04-27 00:53:41.776159] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.194 [2024-04-27 00:53:41.776163] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.776166] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82a60) on tqpair=0xd1ad10 00:19:49.194 [2024-04-27 00:53:41.776171] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:19:49.194 [2024-04-27 00:53:41.776176] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:19:49.194 [2024-04-27 00:53:41.776184] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:49.194 [2024-04-27 00:53:41.776290] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:19:49.194 [2024-04-27 00:53:41.776294] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:49.194 [2024-04-27 00:53:41.776303] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.776307] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.776310] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd1ad10) 00:19:49.194 [2024-04-27 00:53:41.776317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.194 [2024-04-27 00:53:41.776329] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a60, cid 0, qid 0 00:19:49.194 [2024-04-27 00:53:41.776464] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.194 [2024-04-27 00:53:41.776474] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.194 [2024-04-27 00:53:41.776477] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.194 
[2024-04-27 00:53:41.776483] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82a60) on tqpair=0xd1ad10 00:19:49.194 [2024-04-27 00:53:41.776489] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:49.194 [2024-04-27 00:53:41.776499] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.776503] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.776506] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd1ad10) 00:19:49.194 [2024-04-27 00:53:41.776512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.194 [2024-04-27 00:53:41.776524] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a60, cid 0, qid 0 00:19:49.194 [2024-04-27 00:53:41.776657] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.194 [2024-04-27 00:53:41.776667] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.194 [2024-04-27 00:53:41.776670] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.776673] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82a60) on tqpair=0xd1ad10 00:19:49.194 [2024-04-27 00:53:41.776678] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:49.194 [2024-04-27 00:53:41.776683] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:19:49.194 [2024-04-27 00:53:41.776691] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:19:49.194 [2024-04-27 00:53:41.776700] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:19:49.194 [2024-04-27 00:53:41.776710] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.776714] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd1ad10) 00:19:49.194 [2024-04-27 00:53:41.776721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.194 [2024-04-27 00:53:41.776733] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a60, cid 0, qid 0 00:19:49.194 [2024-04-27 00:53:41.776887] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.194 [2024-04-27 00:53:41.776897] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.194 [2024-04-27 00:53:41.776900] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.194 [2024-04-27 00:53:41.776904] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd1ad10): datao=0, datal=4096, cccid=0 00:19:49.195 [2024-04-27 00:53:41.776908] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd82a60) on tqpair(0xd1ad10): expected_datao=0, payload_size=4096 00:19:49.195 [2024-04-27 00:53:41.776913] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.195 
[2024-04-27 00:53:41.777128] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777133] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777242] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.195 [2024-04-27 00:53:41.777251] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.195 [2024-04-27 00:53:41.777254] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777258] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82a60) on tqpair=0xd1ad10 00:19:49.195 [2024-04-27 00:53:41.777267] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:19:49.195 [2024-04-27 00:53:41.777271] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:19:49.195 [2024-04-27 00:53:41.777279] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:19:49.195 [2024-04-27 00:53:41.777284] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:19:49.195 [2024-04-27 00:53:41.777288] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:19:49.195 [2024-04-27 00:53:41.777292] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:19:49.195 [2024-04-27 00:53:41.777301] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:19:49.195 [2024-04-27 00:53:41.777308] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777312] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777315] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd1ad10) 00:19:49.195 [2024-04-27 00:53:41.777322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.195 [2024-04-27 00:53:41.777335] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a60, cid 0, qid 0 00:19:49.195 [2024-04-27 00:53:41.777473] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.195 [2024-04-27 00:53:41.777483] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.195 [2024-04-27 00:53:41.777486] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777489] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82a60) on tqpair=0xd1ad10 00:19:49.195 [2024-04-27 00:53:41.777497] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777501] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777504] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd1ad10) 00:19:49.195 [2024-04-27 00:53:41.777510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.195 [2024-04-27 00:53:41.777516] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777519] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777523] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd1ad10) 00:19:49.195 [2024-04-27 00:53:41.777527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.195 [2024-04-27 00:53:41.777533] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777536] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777539] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd1ad10) 00:19:49.195 [2024-04-27 00:53:41.777544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.195 [2024-04-27 00:53:41.777549] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777553] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777556] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd1ad10) 00:19:49.195 [2024-04-27 00:53:41.777561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.195 [2024-04-27 00:53:41.777565] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:19:49.195 [2024-04-27 00:53:41.777578] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:49.195 [2024-04-27 00:53:41.777584] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777590] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd1ad10) 00:19:49.195 [2024-04-27 00:53:41.777596] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.195 [2024-04-27 00:53:41.777609] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82a60, cid 0, qid 0 00:19:49.195 [2024-04-27 00:53:41.777614] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82bc0, cid 1, qid 0 00:19:49.195 [2024-04-27 00:53:41.777618] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82d20, cid 2, qid 0 00:19:49.195 [2024-04-27 00:53:41.777622] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82e80, cid 3, qid 0 00:19:49.195 [2024-04-27 00:53:41.777626] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82fe0, cid 4, qid 0 00:19:49.195 [2024-04-27 00:53:41.777797] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.195 [2024-04-27 00:53:41.777806] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.195 [2024-04-27 00:53:41.777809] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777813] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82fe0) on tqpair=0xd1ad10 00:19:49.195 [2024-04-27 00:53:41.777819] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:19:49.195 [2024-04-27 00:53:41.777824] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:19:49.195 [2024-04-27 00:53:41.777835] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.777839] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd1ad10) 00:19:49.195 [2024-04-27 00:53:41.777845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.195 [2024-04-27 00:53:41.777858] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82fe0, cid 4, qid 0 00:19:49.195 [2024-04-27 00:53:41.778003] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.195 [2024-04-27 00:53:41.778013] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.195 [2024-04-27 00:53:41.778017] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.778020] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd1ad10): datao=0, datal=4096, cccid=4 00:19:49.195 [2024-04-27 00:53:41.778024] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd82fe0) on tqpair(0xd1ad10): expected_datao=0, payload_size=4096 00:19:49.195 [2024-04-27 00:53:41.778028] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.778035] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.778038] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.778267] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.195 [2024-04-27 00:53:41.778273] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.195 [2024-04-27 00:53:41.778276] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.778280] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82fe0) on tqpair=0xd1ad10 00:19:49.195 [2024-04-27 00:53:41.778294] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:19:49.195 [2024-04-27 00:53:41.778312] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.778316] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd1ad10) 00:19:49.195 [2024-04-27 00:53:41.778323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.195 [2024-04-27 00:53:41.778329] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.778335] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.778338] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd1ad10) 00:19:49.195 [2024-04-27 00:53:41.778344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.195 [2024-04-27 00:53:41.778360] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xd82fe0, cid 4, qid 0 00:19:49.195 [2024-04-27 00:53:41.778365] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd83140, cid 5, qid 0 00:19:49.195 [2024-04-27 00:53:41.778537] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.195 [2024-04-27 00:53:41.778547] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.195 [2024-04-27 00:53:41.778550] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.778553] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd1ad10): datao=0, datal=1024, cccid=4 00:19:49.195 [2024-04-27 00:53:41.778558] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd82fe0) on tqpair(0xd1ad10): expected_datao=0, payload_size=1024 00:19:49.195 [2024-04-27 00:53:41.778562] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.778568] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.778571] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.778576] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.195 [2024-04-27 00:53:41.778581] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.195 [2024-04-27 00:53:41.778584] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.778588] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd83140) on tqpair=0xd1ad10 00:19:49.195 [2024-04-27 00:53:41.819234] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.195 [2024-04-27 00:53:41.819249] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.195 [2024-04-27 00:53:41.819253] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.819257] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82fe0) on tqpair=0xd1ad10 00:19:49.195 [2024-04-27 00:53:41.819270] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.195 [2024-04-27 00:53:41.819273] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd1ad10) 00:19:49.195 [2024-04-27 00:53:41.819280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.195 [2024-04-27 00:53:41.819298] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82fe0, cid 4, qid 0 00:19:49.195 [2024-04-27 00:53:41.819448] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.196 [2024-04-27 00:53:41.819458] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.196 [2024-04-27 00:53:41.819461] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.196 [2024-04-27 00:53:41.819464] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd1ad10): datao=0, datal=3072, cccid=4 00:19:49.196 [2024-04-27 00:53:41.819468] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd82fe0) on tqpair(0xd1ad10): expected_datao=0, payload_size=3072 00:19:49.196 [2024-04-27 00:53:41.819472] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.196 [2024-04-27 00:53:41.819479] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.196 [2024-04-27 00:53:41.819482] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.196 [2024-04-27 00:53:41.819719] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.196 [2024-04-27 00:53:41.819724] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.196 [2024-04-27 00:53:41.819727] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.196 [2024-04-27 00:53:41.819730] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82fe0) on tqpair=0xd1ad10 00:19:49.196 [2024-04-27 00:53:41.819743] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.196 [2024-04-27 00:53:41.819746] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd1ad10) 00:19:49.196 [2024-04-27 00:53:41.819753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.196 [2024-04-27 00:53:41.819768] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82fe0, cid 4, qid 0 00:19:49.196 [2024-04-27 00:53:41.819915] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.196 [2024-04-27 00:53:41.819924] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.196 [2024-04-27 00:53:41.819928] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.196 [2024-04-27 00:53:41.819931] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd1ad10): datao=0, datal=8, cccid=4 00:19:49.196 [2024-04-27 00:53:41.819935] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd82fe0) on tqpair(0xd1ad10): expected_datao=0, payload_size=8 00:19:49.196 [2024-04-27 00:53:41.819939] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.196 [2024-04-27 00:53:41.819945] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.196 [2024-04-27 00:53:41.819948] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.196 [2024-04-27 00:53:41.860214] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.196 [2024-04-27 00:53:41.860228] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.196 [2024-04-27 00:53:41.860231] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.196 [2024-04-27 00:53:41.860235] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82fe0) on tqpair=0xd1ad10 00:19:49.196 ===================================================== 00:19:49.196 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:49.196 ===================================================== 00:19:49.196 Controller Capabilities/Features 00:19:49.196 ================================ 00:19:49.196 Vendor ID: 0000 00:19:49.196 Subsystem Vendor ID: 0000 00:19:49.196 Serial Number: .................... 00:19:49.196 Model Number: ........................................ 
00:19:49.196 Firmware Version: 24.05 00:19:49.196 Recommended Arb Burst: 0 00:19:49.196 IEEE OUI Identifier: 00 00 00 00:19:49.196 Multi-path I/O 00:19:49.196 May have multiple subsystem ports: No 00:19:49.196 May have multiple controllers: No 00:19:49.196 Associated with SR-IOV VF: No 00:19:49.196 Max Data Transfer Size: 131072 00:19:49.196 Max Number of Namespaces: 0 00:19:49.196 Max Number of I/O Queues: 1024 00:19:49.196 NVMe Specification Version (VS): 1.3 00:19:49.196 NVMe Specification Version (Identify): 1.3 00:19:49.196 Maximum Queue Entries: 128 00:19:49.196 Contiguous Queues Required: Yes 00:19:49.196 Arbitration Mechanisms Supported 00:19:49.196 Weighted Round Robin: Not Supported 00:19:49.196 Vendor Specific: Not Supported 00:19:49.196 Reset Timeout: 15000 ms 00:19:49.196 Doorbell Stride: 4 bytes 00:19:49.196 NVM Subsystem Reset: Not Supported 00:19:49.196 Command Sets Supported 00:19:49.196 NVM Command Set: Supported 00:19:49.196 Boot Partition: Not Supported 00:19:49.196 Memory Page Size Minimum: 4096 bytes 00:19:49.196 Memory Page Size Maximum: 4096 bytes 00:19:49.196 Persistent Memory Region: Not Supported 00:19:49.196 Optional Asynchronous Events Supported 00:19:49.196 Namespace Attribute Notices: Not Supported 00:19:49.196 Firmware Activation Notices: Not Supported 00:19:49.196 ANA Change Notices: Not Supported 00:19:49.196 PLE Aggregate Log Change Notices: Not Supported 00:19:49.196 LBA Status Info Alert Notices: Not Supported 00:19:49.196 EGE Aggregate Log Change Notices: Not Supported 00:19:49.196 Normal NVM Subsystem Shutdown event: Not Supported 00:19:49.196 Zone Descriptor Change Notices: Not Supported 00:19:49.196 Discovery Log Change Notices: Supported 00:19:49.196 Controller Attributes 00:19:49.196 128-bit Host Identifier: Not Supported 00:19:49.196 Non-Operational Permissive Mode: Not Supported 00:19:49.196 NVM Sets: Not Supported 00:19:49.196 Read Recovery Levels: Not Supported 00:19:49.196 Endurance Groups: Not Supported 00:19:49.196 Predictable Latency Mode: Not Supported 00:19:49.196 Traffic Based Keep ALive: Not Supported 00:19:49.196 Namespace Granularity: Not Supported 00:19:49.196 SQ Associations: Not Supported 00:19:49.196 UUID List: Not Supported 00:19:49.196 Multi-Domain Subsystem: Not Supported 00:19:49.196 Fixed Capacity Management: Not Supported 00:19:49.196 Variable Capacity Management: Not Supported 00:19:49.196 Delete Endurance Group: Not Supported 00:19:49.196 Delete NVM Set: Not Supported 00:19:49.196 Extended LBA Formats Supported: Not Supported 00:19:49.196 Flexible Data Placement Supported: Not Supported 00:19:49.196 00:19:49.196 Controller Memory Buffer Support 00:19:49.196 ================================ 00:19:49.196 Supported: No 00:19:49.196 00:19:49.196 Persistent Memory Region Support 00:19:49.196 ================================ 00:19:49.196 Supported: No 00:19:49.196 00:19:49.196 Admin Command Set Attributes 00:19:49.196 ============================ 00:19:49.196 Security Send/Receive: Not Supported 00:19:49.196 Format NVM: Not Supported 00:19:49.196 Firmware Activate/Download: Not Supported 00:19:49.196 Namespace Management: Not Supported 00:19:49.196 Device Self-Test: Not Supported 00:19:49.196 Directives: Not Supported 00:19:49.196 NVMe-MI: Not Supported 00:19:49.196 Virtualization Management: Not Supported 00:19:49.196 Doorbell Buffer Config: Not Supported 00:19:49.196 Get LBA Status Capability: Not Supported 00:19:49.196 Command & Feature Lockdown Capability: Not Supported 00:19:49.196 Abort Command Limit: 1 00:19:49.196 Async 
Event Request Limit: 4 00:19:49.196 Number of Firmware Slots: N/A 00:19:49.196 Firmware Slot 1 Read-Only: N/A 00:19:49.196 Firmware Activation Without Reset: N/A 00:19:49.196 Multiple Update Detection Support: N/A 00:19:49.196 Firmware Update Granularity: No Information Provided 00:19:49.196 Per-Namespace SMART Log: No 00:19:49.196 Asymmetric Namespace Access Log Page: Not Supported 00:19:49.196 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:49.196 Command Effects Log Page: Not Supported 00:19:49.196 Get Log Page Extended Data: Supported 00:19:49.196 Telemetry Log Pages: Not Supported 00:19:49.196 Persistent Event Log Pages: Not Supported 00:19:49.196 Supported Log Pages Log Page: May Support 00:19:49.196 Commands Supported & Effects Log Page: Not Supported 00:19:49.196 Feature Identifiers & Effects Log Page:May Support 00:19:49.196 NVMe-MI Commands & Effects Log Page: May Support 00:19:49.196 Data Area 4 for Telemetry Log: Not Supported 00:19:49.196 Error Log Page Entries Supported: 128 00:19:49.196 Keep Alive: Not Supported 00:19:49.196 00:19:49.196 NVM Command Set Attributes 00:19:49.196 ========================== 00:19:49.196 Submission Queue Entry Size 00:19:49.196 Max: 1 00:19:49.196 Min: 1 00:19:49.196 Completion Queue Entry Size 00:19:49.196 Max: 1 00:19:49.196 Min: 1 00:19:49.196 Number of Namespaces: 0 00:19:49.196 Compare Command: Not Supported 00:19:49.196 Write Uncorrectable Command: Not Supported 00:19:49.196 Dataset Management Command: Not Supported 00:19:49.196 Write Zeroes Command: Not Supported 00:19:49.196 Set Features Save Field: Not Supported 00:19:49.196 Reservations: Not Supported 00:19:49.196 Timestamp: Not Supported 00:19:49.196 Copy: Not Supported 00:19:49.196 Volatile Write Cache: Not Present 00:19:49.196 Atomic Write Unit (Normal): 1 00:19:49.196 Atomic Write Unit (PFail): 1 00:19:49.196 Atomic Compare & Write Unit: 1 00:19:49.196 Fused Compare & Write: Supported 00:19:49.196 Scatter-Gather List 00:19:49.196 SGL Command Set: Supported 00:19:49.196 SGL Keyed: Supported 00:19:49.196 SGL Bit Bucket Descriptor: Not Supported 00:19:49.196 SGL Metadata Pointer: Not Supported 00:19:49.196 Oversized SGL: Not Supported 00:19:49.196 SGL Metadata Address: Not Supported 00:19:49.196 SGL Offset: Supported 00:19:49.196 Transport SGL Data Block: Not Supported 00:19:49.196 Replay Protected Memory Block: Not Supported 00:19:49.196 00:19:49.196 Firmware Slot Information 00:19:49.196 ========================= 00:19:49.196 Active slot: 0 00:19:49.196 00:19:49.196 00:19:49.196 Error Log 00:19:49.196 ========= 00:19:49.196 00:19:49.196 Active Namespaces 00:19:49.196 ================= 00:19:49.196 Discovery Log Page 00:19:49.196 ================== 00:19:49.196 Generation Counter: 2 00:19:49.197 Number of Records: 2 00:19:49.197 Record Format: 0 00:19:49.197 00:19:49.197 Discovery Log Entry 0 00:19:49.197 ---------------------- 00:19:49.197 Transport Type: 3 (TCP) 00:19:49.197 Address Family: 1 (IPv4) 00:19:49.197 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:49.197 Entry Flags: 00:19:49.197 Duplicate Returned Information: 1 00:19:49.197 Explicit Persistent Connection Support for Discovery: 1 00:19:49.197 Transport Requirements: 00:19:49.197 Secure Channel: Not Required 00:19:49.197 Port ID: 0 (0x0000) 00:19:49.197 Controller ID: 65535 (0xffff) 00:19:49.197 Admin Max SQ Size: 128 00:19:49.197 Transport Service Identifier: 4420 00:19:49.197 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:49.197 Transport Address: 10.0.0.2 00:19:49.197 
Discovery Log Entry 1 00:19:49.197 ---------------------- 00:19:49.197 Transport Type: 3 (TCP) 00:19:49.197 Address Family: 1 (IPv4) 00:19:49.197 Subsystem Type: 2 (NVM Subsystem) 00:19:49.197 Entry Flags: 00:19:49.197 Duplicate Returned Information: 0 00:19:49.197 Explicit Persistent Connection Support for Discovery: 0 00:19:49.197 Transport Requirements: 00:19:49.197 Secure Channel: Not Required 00:19:49.197 Port ID: 0 (0x0000) 00:19:49.197 Controller ID: 65535 (0xffff) 00:19:49.197 Admin Max SQ Size: 128 00:19:49.197 Transport Service Identifier: 4420 00:19:49.197 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:19:49.197 Transport Address: 10.0.0.2 [2024-04-27 00:53:41.860318] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:19:49.197 [2024-04-27 00:53:41.860332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.197 [2024-04-27 00:53:41.860338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.197 [2024-04-27 00:53:41.860343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.197 [2024-04-27 00:53:41.860348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.197 [2024-04-27 00:53:41.860356] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.860359] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.860363] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd1ad10) 00:19:49.197 [2024-04-27 00:53:41.860370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.197 [2024-04-27 00:53:41.860383] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82e80, cid 3, qid 0 00:19:49.197 [2024-04-27 00:53:41.860517] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.197 [2024-04-27 00:53:41.860527] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.197 [2024-04-27 00:53:41.860531] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.860534] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82e80) on tqpair=0xd1ad10 00:19:49.197 [2024-04-27 00:53:41.860541] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.860545] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.860548] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd1ad10) 00:19:49.197 [2024-04-27 00:53:41.860555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.197 [2024-04-27 00:53:41.860574] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82e80, cid 3, qid 0 00:19:49.197 [2024-04-27 00:53:41.860713] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.197 [2024-04-27 00:53:41.860723] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.197 [2024-04-27 00:53:41.860726] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.860729] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82e80) on tqpair=0xd1ad10 00:19:49.197 [2024-04-27 00:53:41.860734] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:19:49.197 [2024-04-27 00:53:41.860739] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:19:49.197 [2024-04-27 00:53:41.860749] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.860753] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.860756] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd1ad10) 00:19:49.197 [2024-04-27 00:53:41.860762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.197 [2024-04-27 00:53:41.860774] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82e80, cid 3, qid 0 00:19:49.197 [2024-04-27 00:53:41.860906] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.197 [2024-04-27 00:53:41.860916] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.197 [2024-04-27 00:53:41.860919] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.860922] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82e80) on tqpair=0xd1ad10 00:19:49.197 [2024-04-27 00:53:41.860934] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.860938] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.860941] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd1ad10) 00:19:49.197 [2024-04-27 00:53:41.860947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.197 [2024-04-27 00:53:41.860959] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82e80, cid 3, qid 0 00:19:49.197 [2024-04-27 00:53:41.861107] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.197 [2024-04-27 00:53:41.861118] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.197 [2024-04-27 00:53:41.861121] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.861124] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82e80) on tqpair=0xd1ad10 00:19:49.197 [2024-04-27 00:53:41.861136] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.861139] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.861143] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd1ad10) 00:19:49.197 [2024-04-27 00:53:41.861149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.197 [2024-04-27 00:53:41.861162] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82e80, cid 3, qid 0 00:19:49.197 [2024-04-27 00:53:41.861467] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.197 [2024-04-27 
00:53:41.861473] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.197 [2024-04-27 00:53:41.861475] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.861479] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82e80) on tqpair=0xd1ad10 00:19:49.197 [2024-04-27 00:53:41.861488] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.861492] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.861495] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd1ad10) 00:19:49.197 [2024-04-27 00:53:41.861503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.197 [2024-04-27 00:53:41.861513] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82e80, cid 3, qid 0 00:19:49.197 [2024-04-27 00:53:41.861664] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.197 [2024-04-27 00:53:41.861673] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.197 [2024-04-27 00:53:41.861676] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.861680] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82e80) on tqpair=0xd1ad10 00:19:49.197 [2024-04-27 00:53:41.861691] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.861695] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.861698] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd1ad10) 00:19:49.197 [2024-04-27 00:53:41.861705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.197 [2024-04-27 00:53:41.861717] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82e80, cid 3, qid 0 00:19:49.197 [2024-04-27 00:53:41.861848] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.197 [2024-04-27 00:53:41.861858] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.197 [2024-04-27 00:53:41.861861] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.861864] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82e80) on tqpair=0xd1ad10 00:19:49.197 [2024-04-27 00:53:41.861875] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.861879] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.861882] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd1ad10) 00:19:49.197 [2024-04-27 00:53:41.861889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.197 [2024-04-27 00:53:41.861900] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82e80, cid 3, qid 0 00:19:49.197 [2024-04-27 00:53:41.862043] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.197 [2024-04-27 00:53:41.862052] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.197 [2024-04-27 00:53:41.862055] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.197 
[2024-04-27 00:53:41.862059] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82e80) on tqpair=0xd1ad10 00:19:49.197 [2024-04-27 00:53:41.866074] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.866080] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.197 [2024-04-27 00:53:41.866083] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd1ad10) 00:19:49.197 [2024-04-27 00:53:41.866090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.197 [2024-04-27 00:53:41.866103] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd82e80, cid 3, qid 0 00:19:49.198 [2024-04-27 00:53:41.866328] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.198 [2024-04-27 00:53:41.866338] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.198 [2024-04-27 00:53:41.866341] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.198 [2024-04-27 00:53:41.866345] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd82e80) on tqpair=0xd1ad10 00:19:49.198 [2024-04-27 00:53:41.866354] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:19:49.198 00:19:49.198 00:53:41 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:19:49.460 [2024-04-27 00:53:41.901986] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:19:49.460 [2024-04-27 00:53:41.902032] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745979 ] 00:19:49.460 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.461 [2024-04-27 00:53:41.931321] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:19:49.461 [2024-04-27 00:53:41.931363] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:19:49.461 [2024-04-27 00:53:41.931368] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:19:49.461 [2024-04-27 00:53:41.931379] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:19:49.461 [2024-04-27 00:53:41.931386] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:19:49.461 [2024-04-27 00:53:41.931912] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:19:49.461 [2024-04-27 00:53:41.931934] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fd1d10 0 00:19:49.461 [2024-04-27 00:53:41.938085] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:19:49.461 [2024-04-27 00:53:41.938103] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:19:49.461 [2024-04-27 00:53:41.938107] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:19:49.461 [2024-04-27 00:53:41.938110] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:19:49.461 [2024-04-27 00:53:41.938141] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.938146] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.938149] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd1d10) 00:19:49.461 [2024-04-27 00:53:41.938159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:19:49.461 [2024-04-27 00:53:41.938174] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039a60, cid 0, qid 0 00:19:49.461 [2024-04-27 00:53:41.946079] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.461 [2024-04-27 00:53:41.946087] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.461 [2024-04-27 00:53:41.946090] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.946094] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039a60) on tqpair=0x1fd1d10 00:19:49.461 [2024-04-27 00:53:41.946105] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:19:49.461 [2024-04-27 00:53:41.946111] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:19:49.461 [2024-04-27 00:53:41.946116] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:19:49.461 [2024-04-27 00:53:41.946126] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.946130] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.461 [2024-04-27 
00:53:41.946133] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd1d10) 00:19:49.461 [2024-04-27 00:53:41.946140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.461 [2024-04-27 00:53:41.946152] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039a60, cid 0, qid 0 00:19:49.461 [2024-04-27 00:53:41.946389] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.461 [2024-04-27 00:53:41.946404] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.461 [2024-04-27 00:53:41.946410] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.946414] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039a60) on tqpair=0x1fd1d10 00:19:49.461 [2024-04-27 00:53:41.946421] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:19:49.461 [2024-04-27 00:53:41.946431] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:19:49.461 [2024-04-27 00:53:41.946439] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.946442] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.946445] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd1d10) 00:19:49.461 [2024-04-27 00:53:41.946453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.461 [2024-04-27 00:53:41.946466] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039a60, cid 0, qid 0 00:19:49.461 [2024-04-27 00:53:41.946636] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.461 [2024-04-27 00:53:41.946646] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.461 [2024-04-27 00:53:41.946649] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.946652] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039a60) on tqpair=0x1fd1d10 00:19:49.461 [2024-04-27 00:53:41.946658] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:19:49.461 [2024-04-27 00:53:41.946667] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:19:49.461 [2024-04-27 00:53:41.946674] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.946678] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.946681] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd1d10) 00:19:49.461 [2024-04-27 00:53:41.946688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.461 [2024-04-27 00:53:41.946700] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039a60, cid 0, qid 0 00:19:49.461 [2024-04-27 00:53:41.946833] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.461 [2024-04-27 00:53:41.946842] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:19:49.461 [2024-04-27 00:53:41.946845] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.946848] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039a60) on tqpair=0x1fd1d10 00:19:49.461 [2024-04-27 00:53:41.946854] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:49.461 [2024-04-27 00:53:41.946866] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.946869] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.946872] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd1d10) 00:19:49.461 [2024-04-27 00:53:41.946879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.461 [2024-04-27 00:53:41.946890] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039a60, cid 0, qid 0 00:19:49.461 [2024-04-27 00:53:41.947066] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.461 [2024-04-27 00:53:41.947083] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.461 [2024-04-27 00:53:41.947087] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.947090] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039a60) on tqpair=0x1fd1d10 00:19:49.461 [2024-04-27 00:53:41.947095] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:19:49.461 [2024-04-27 00:53:41.947103] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:19:49.461 [2024-04-27 00:53:41.947111] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:49.461 [2024-04-27 00:53:41.947216] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:19:49.461 [2024-04-27 00:53:41.947220] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:49.461 [2024-04-27 00:53:41.947228] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.947232] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.947235] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd1d10) 00:19:49.461 [2024-04-27 00:53:41.947242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.461 [2024-04-27 00:53:41.947254] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039a60, cid 0, qid 0 00:19:49.461 [2024-04-27 00:53:41.947391] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.461 [2024-04-27 00:53:41.947401] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.461 [2024-04-27 00:53:41.947404] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.947408] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039a60) on 
tqpair=0x1fd1d10 00:19:49.461 [2024-04-27 00:53:41.947413] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:49.461 [2024-04-27 00:53:41.947424] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.947428] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.947431] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd1d10) 00:19:49.461 [2024-04-27 00:53:41.947438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.461 [2024-04-27 00:53:41.947450] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039a60, cid 0, qid 0 00:19:49.461 [2024-04-27 00:53:41.947592] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.461 [2024-04-27 00:53:41.947601] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.461 [2024-04-27 00:53:41.947604] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.947608] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039a60) on tqpair=0x1fd1d10 00:19:49.461 [2024-04-27 00:53:41.947613] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:49.461 [2024-04-27 00:53:41.947617] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:19:49.461 [2024-04-27 00:53:41.947625] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:19:49.461 [2024-04-27 00:53:41.947638] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:19:49.461 [2024-04-27 00:53:41.947648] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.461 [2024-04-27 00:53:41.947651] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd1d10) 00:19:49.461 [2024-04-27 00:53:41.947658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.461 [2024-04-27 00:53:41.947671] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039a60, cid 0, qid 0 00:19:49.461 [2024-04-27 00:53:41.947843] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.461 [2024-04-27 00:53:41.947852] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.461 [2024-04-27 00:53:41.947855] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.947858] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd1d10): datao=0, datal=4096, cccid=0 00:19:49.462 [2024-04-27 00:53:41.947863] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2039a60) on tqpair(0x1fd1d10): expected_datao=0, payload_size=4096 00:19:49.462 [2024-04-27 00:53:41.947866] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948101] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948105] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948243] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.462 [2024-04-27 00:53:41.948253] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.462 [2024-04-27 00:53:41.948256] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948259] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039a60) on tqpair=0x1fd1d10 00:19:49.462 [2024-04-27 00:53:41.948267] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:19:49.462 [2024-04-27 00:53:41.948271] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:19:49.462 [2024-04-27 00:53:41.948276] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:19:49.462 [2024-04-27 00:53:41.948279] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:19:49.462 [2024-04-27 00:53:41.948283] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:19:49.462 [2024-04-27 00:53:41.948287] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:19:49.462 [2024-04-27 00:53:41.948296] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:19:49.462 [2024-04-27 00:53:41.948304] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948307] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948310] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd1d10) 00:19:49.462 [2024-04-27 00:53:41.948317] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.462 [2024-04-27 00:53:41.948330] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039a60, cid 0, qid 0 00:19:49.462 [2024-04-27 00:53:41.948469] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.462 [2024-04-27 00:53:41.948478] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.462 [2024-04-27 00:53:41.948482] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948485] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039a60) on tqpair=0x1fd1d10 00:19:49.462 [2024-04-27 00:53:41.948493] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948496] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948500] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd1d10) 00:19:49.462 [2024-04-27 00:53:41.948506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.462 [2024-04-27 00:53:41.948512] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948515] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948518] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fd1d10) 00:19:49.462 [2024-04-27 00:53:41.948526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.462 [2024-04-27 00:53:41.948531] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948534] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948537] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fd1d10) 00:19:49.462 [2024-04-27 00:53:41.948542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.462 [2024-04-27 00:53:41.948547] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948550] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948553] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd1d10) 00:19:49.462 [2024-04-27 00:53:41.948558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.462 [2024-04-27 00:53:41.948563] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:19:49.462 [2024-04-27 00:53:41.948574] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:49.462 [2024-04-27 00:53:41.948581] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948584] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd1d10) 00:19:49.462 [2024-04-27 00:53:41.948589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.462 [2024-04-27 00:53:41.948603] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039a60, cid 0, qid 0 00:19:49.462 [2024-04-27 00:53:41.948607] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039bc0, cid 1, qid 0 00:19:49.462 [2024-04-27 00:53:41.948611] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039d20, cid 2, qid 0 00:19:49.462 [2024-04-27 00:53:41.948615] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039e80, cid 3, qid 0 00:19:49.462 [2024-04-27 00:53:41.948619] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039fe0, cid 4, qid 0 00:19:49.462 [2024-04-27 00:53:41.948806] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.462 [2024-04-27 00:53:41.948816] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.462 [2024-04-27 00:53:41.948819] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948822] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039fe0) on tqpair=0x1fd1d10 00:19:49.462 [2024-04-27 00:53:41.948828] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:19:49.462 [2024-04-27 00:53:41.948832] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:19:49.462 [2024-04-27 00:53:41.948845] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:19:49.462 [2024-04-27 00:53:41.948851] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:19:49.462 [2024-04-27 00:53:41.948857] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948861] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.948864] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd1d10) 00:19:49.462 [2024-04-27 00:53:41.948870] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:49.462 [2024-04-27 00:53:41.948884] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039fe0, cid 4, qid 0 00:19:49.462 [2024-04-27 00:53:41.949036] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.462 [2024-04-27 00:53:41.949045] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.462 [2024-04-27 00:53:41.949048] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.949052] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039fe0) on tqpair=0x1fd1d10 00:19:49.462 [2024-04-27 00:53:41.949101] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:19:49.462 [2024-04-27 00:53:41.949112] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:19:49.462 [2024-04-27 00:53:41.949121] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.949124] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd1d10) 00:19:49.462 [2024-04-27 00:53:41.949130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.462 [2024-04-27 00:53:41.949143] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039fe0, cid 4, qid 0 00:19:49.462 [2024-04-27 00:53:41.949291] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.462 [2024-04-27 00:53:41.949300] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.462 [2024-04-27 00:53:41.949303] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.949307] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd1d10): datao=0, datal=4096, cccid=4 00:19:49.462 [2024-04-27 00:53:41.949311] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2039fe0) on tqpair(0x1fd1d10): expected_datao=0, payload_size=4096 00:19:49.462 [2024-04-27 00:53:41.949314] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.949525] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.949529] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.993076] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.462 [2024-04-27 00:53:41.993084] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.462 [2024-04-27 00:53:41.993087] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.993090] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039fe0) on tqpair=0x1fd1d10 00:19:49.462 [2024-04-27 00:53:41.993102] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:19:49.462 [2024-04-27 00:53:41.993112] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:19:49.462 [2024-04-27 00:53:41.993120] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:19:49.462 [2024-04-27 00:53:41.993128] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.993131] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd1d10) 00:19:49.462 [2024-04-27 00:53:41.993138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.462 [2024-04-27 00:53:41.993151] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039fe0, cid 4, qid 0 00:19:49.462 [2024-04-27 00:53:41.993391] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.462 [2024-04-27 00:53:41.993401] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.462 [2024-04-27 00:53:41.993404] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.993407] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd1d10): datao=0, datal=4096, cccid=4 00:19:49.462 [2024-04-27 00:53:41.993414] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2039fe0) on tqpair(0x1fd1d10): expected_datao=0, payload_size=4096 00:19:49.462 [2024-04-27 00:53:41.993418] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.462 [2024-04-27 00:53:41.993632] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:41.993636] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.034288] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.463 [2024-04-27 00:53:42.034302] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.463 [2024-04-27 00:53:42.034306] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.034310] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039fe0) on tqpair=0x1fd1d10 00:19:49.463 [2024-04-27 00:53:42.034327] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:49.463 [2024-04-27 00:53:42.034337] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:49.463 [2024-04-27 00:53:42.034346] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.034349] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1fd1d10) 00:19:49.463 [2024-04-27 00:53:42.034356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.463 [2024-04-27 00:53:42.034370] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039fe0, cid 4, qid 0 00:19:49.463 [2024-04-27 00:53:42.034511] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.463 [2024-04-27 00:53:42.034521] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.463 [2024-04-27 00:53:42.034524] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.034528] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd1d10): datao=0, datal=4096, cccid=4 00:19:49.463 [2024-04-27 00:53:42.034532] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2039fe0) on tqpair(0x1fd1d10): expected_datao=0, payload_size=4096 00:19:49.463 [2024-04-27 00:53:42.034536] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.034756] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.034760] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.079081] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.463 [2024-04-27 00:53:42.079091] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.463 [2024-04-27 00:53:42.079095] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.079098] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039fe0) on tqpair=0x1fd1d10 00:19:49.463 [2024-04-27 00:53:42.079108] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:49.463 [2024-04-27 00:53:42.079116] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:19:49.463 [2024-04-27 00:53:42.079128] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:19:49.463 [2024-04-27 00:53:42.079134] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:49.463 [2024-04-27 00:53:42.079138] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:19:49.463 [2024-04-27 00:53:42.079143] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:19:49.463 [2024-04-27 00:53:42.079147] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:19:49.463 [2024-04-27 00:53:42.079154] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:19:49.463 [2024-04-27 00:53:42.079168] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.079171] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd1d10) 00:19:49.463 [2024-04-27 00:53:42.079179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.463 [2024-04-27 00:53:42.079184] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.079188] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.079191] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd1d10) 00:19:49.463 [2024-04-27 00:53:42.079196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.463 [2024-04-27 00:53:42.079211] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039fe0, cid 4, qid 0 00:19:49.463 [2024-04-27 00:53:42.079215] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203a140, cid 5, qid 0 00:19:49.463 [2024-04-27 00:53:42.079404] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.463 [2024-04-27 00:53:42.079415] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.463 [2024-04-27 00:53:42.079418] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.079421] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039fe0) on tqpair=0x1fd1d10 00:19:49.463 [2024-04-27 00:53:42.079428] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.463 [2024-04-27 00:53:42.079433] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.463 [2024-04-27 00:53:42.079436] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.079440] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x203a140) on tqpair=0x1fd1d10 00:19:49.463 [2024-04-27 00:53:42.079451] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.079455] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd1d10) 00:19:49.463 [2024-04-27 00:53:42.079461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.463 [2024-04-27 00:53:42.079473] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203a140, cid 5, qid 0 00:19:49.463 [2024-04-27 00:53:42.079614] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.463 [2024-04-27 00:53:42.079623] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.463 [2024-04-27 00:53:42.079626] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.079629] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x203a140) on tqpair=0x1fd1d10 00:19:49.463 [2024-04-27 00:53:42.079640] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.079644] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd1d10) 00:19:49.463 [2024-04-27 00:53:42.079650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.463 [2024-04-27 00:53:42.079662] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203a140, cid 5, qid 0 00:19:49.463 [2024-04-27 00:53:42.079844] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.463 [2024-04-27 00:53:42.079854] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.463 [2024-04-27 00:53:42.079858] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.079862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x203a140) on tqpair=0x1fd1d10 00:19:49.463 [2024-04-27 00:53:42.079873] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.079900] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd1d10) 00:19:49.463 [2024-04-27 00:53:42.079907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.463 [2024-04-27 00:53:42.079919] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203a140, cid 5, qid 0 00:19:49.463 [2024-04-27 00:53:42.080082] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.463 [2024-04-27 00:53:42.080092] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.463 [2024-04-27 00:53:42.080096] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.080100] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x203a140) on tqpair=0x1fd1d10 00:19:49.463 [2024-04-27 00:53:42.080115] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.080119] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd1d10) 00:19:49.463 [2024-04-27 00:53:42.080126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.463 [2024-04-27 00:53:42.080132] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.080137] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd1d10) 00:19:49.463 [2024-04-27 00:53:42.080143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.463 [2024-04-27 00:53:42.080149] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.080152] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1fd1d10) 00:19:49.463 [2024-04-27 00:53:42.080158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.463 [2024-04-27 00:53:42.080164] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.080167] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fd1d10) 00:19:49.463 [2024-04-27 00:53:42.080172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.463 [2024-04-27 00:53:42.080186] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203a140, cid 5, qid 0 00:19:49.463 [2024-04-27 00:53:42.080191] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039fe0, cid 4, qid 0 00:19:49.463 [2024-04-27 00:53:42.080195] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x203a2a0, cid 6, qid 0 00:19:49.463 [2024-04-27 00:53:42.080199] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203a400, cid 7, qid 0 00:19:49.463 [2024-04-27 00:53:42.080406] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.463 [2024-04-27 00:53:42.080417] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.463 [2024-04-27 00:53:42.080420] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.080423] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd1d10): datao=0, datal=8192, cccid=5 00:19:49.463 [2024-04-27 00:53:42.080427] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x203a140) on tqpair(0x1fd1d10): expected_datao=0, payload_size=8192 00:19:49.463 [2024-04-27 00:53:42.080431] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.080906] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.080910] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.080915] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.463 [2024-04-27 00:53:42.080919] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.463 [2024-04-27 00:53:42.080923] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.463 [2024-04-27 00:53:42.080929] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd1d10): datao=0, datal=512, cccid=4 00:19:49.464 [2024-04-27 00:53:42.080933] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2039fe0) on tqpair(0x1fd1d10): expected_datao=0, payload_size=512 00:19:49.464 [2024-04-27 00:53:42.080937] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.080942] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.080945] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.080950] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.464 [2024-04-27 00:53:42.080955] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.464 [2024-04-27 00:53:42.080958] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.080961] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd1d10): datao=0, datal=512, cccid=6 00:19:49.464 [2024-04-27 00:53:42.080965] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x203a2a0) on tqpair(0x1fd1d10): expected_datao=0, payload_size=512 00:19:49.464 [2024-04-27 00:53:42.080968] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.080974] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.080977] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.080981] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:19:49.464 [2024-04-27 00:53:42.080986] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:19:49.464 [2024-04-27 00:53:42.080989] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.080992] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd1d10): datao=0, datal=4096, cccid=7 
00:19:49.464 [2024-04-27 00:53:42.080996] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x203a400) on tqpair(0x1fd1d10): expected_datao=0, payload_size=4096 00:19:49.464 [2024-04-27 00:53:42.081000] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.081005] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.081008] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.081184] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.464 [2024-04-27 00:53:42.081189] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.464 [2024-04-27 00:53:42.081192] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.081196] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x203a140) on tqpair=0x1fd1d10 00:19:49.464 [2024-04-27 00:53:42.081209] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.464 [2024-04-27 00:53:42.081214] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.464 [2024-04-27 00:53:42.081217] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.081220] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039fe0) on tqpair=0x1fd1d10 00:19:49.464 [2024-04-27 00:53:42.081228] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.464 [2024-04-27 00:53:42.081233] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.464 [2024-04-27 00:53:42.081236] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.081240] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x203a2a0) on tqpair=0x1fd1d10 00:19:49.464 [2024-04-27 00:53:42.081246] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.464 [2024-04-27 00:53:42.081251] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.464 [2024-04-27 00:53:42.081254] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.464 [2024-04-27 00:53:42.081257] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x203a400) on tqpair=0x1fd1d10 00:19:49.464 ===================================================== 00:19:49.464 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:49.464 ===================================================== 00:19:49.464 Controller Capabilities/Features 00:19:49.464 ================================ 00:19:49.464 Vendor ID: 8086 00:19:49.464 Subsystem Vendor ID: 8086 00:19:49.464 Serial Number: SPDK00000000000001 00:19:49.464 Model Number: SPDK bdev Controller 00:19:49.464 Firmware Version: 24.05 00:19:49.464 Recommended Arb Burst: 6 00:19:49.464 IEEE OUI Identifier: e4 d2 5c 00:19:49.464 Multi-path I/O 00:19:49.464 May have multiple subsystem ports: Yes 00:19:49.464 May have multiple controllers: Yes 00:19:49.464 Associated with SR-IOV VF: No 00:19:49.464 Max Data Transfer Size: 131072 00:19:49.464 Max Number of Namespaces: 32 00:19:49.464 Max Number of I/O Queues: 127 00:19:49.464 NVMe Specification Version (VS): 1.3 00:19:49.464 NVMe Specification Version (Identify): 1.3 00:19:49.464 Maximum Queue Entries: 128 00:19:49.464 Contiguous Queues Required: Yes 00:19:49.464 Arbitration Mechanisms Supported 00:19:49.464 Weighted Round Robin: Not Supported 00:19:49.464 Vendor 
Specific: Not Supported 00:19:49.464 Reset Timeout: 15000 ms 00:19:49.464 Doorbell Stride: 4 bytes 00:19:49.464 NVM Subsystem Reset: Not Supported 00:19:49.464 Command Sets Supported 00:19:49.464 NVM Command Set: Supported 00:19:49.464 Boot Partition: Not Supported 00:19:49.464 Memory Page Size Minimum: 4096 bytes 00:19:49.464 Memory Page Size Maximum: 4096 bytes 00:19:49.464 Persistent Memory Region: Not Supported 00:19:49.464 Optional Asynchronous Events Supported 00:19:49.464 Namespace Attribute Notices: Supported 00:19:49.464 Firmware Activation Notices: Not Supported 00:19:49.464 ANA Change Notices: Not Supported 00:19:49.464 PLE Aggregate Log Change Notices: Not Supported 00:19:49.464 LBA Status Info Alert Notices: Not Supported 00:19:49.464 EGE Aggregate Log Change Notices: Not Supported 00:19:49.464 Normal NVM Subsystem Shutdown event: Not Supported 00:19:49.464 Zone Descriptor Change Notices: Not Supported 00:19:49.464 Discovery Log Change Notices: Not Supported 00:19:49.464 Controller Attributes 00:19:49.464 128-bit Host Identifier: Supported 00:19:49.464 Non-Operational Permissive Mode: Not Supported 00:19:49.464 NVM Sets: Not Supported 00:19:49.464 Read Recovery Levels: Not Supported 00:19:49.464 Endurance Groups: Not Supported 00:19:49.464 Predictable Latency Mode: Not Supported 00:19:49.464 Traffic Based Keep ALive: Not Supported 00:19:49.464 Namespace Granularity: Not Supported 00:19:49.464 SQ Associations: Not Supported 00:19:49.464 UUID List: Not Supported 00:19:49.464 Multi-Domain Subsystem: Not Supported 00:19:49.464 Fixed Capacity Management: Not Supported 00:19:49.464 Variable Capacity Management: Not Supported 00:19:49.464 Delete Endurance Group: Not Supported 00:19:49.464 Delete NVM Set: Not Supported 00:19:49.464 Extended LBA Formats Supported: Not Supported 00:19:49.464 Flexible Data Placement Supported: Not Supported 00:19:49.464 00:19:49.464 Controller Memory Buffer Support 00:19:49.464 ================================ 00:19:49.464 Supported: No 00:19:49.464 00:19:49.464 Persistent Memory Region Support 00:19:49.464 ================================ 00:19:49.464 Supported: No 00:19:49.464 00:19:49.464 Admin Command Set Attributes 00:19:49.464 ============================ 00:19:49.464 Security Send/Receive: Not Supported 00:19:49.464 Format NVM: Not Supported 00:19:49.464 Firmware Activate/Download: Not Supported 00:19:49.464 Namespace Management: Not Supported 00:19:49.464 Device Self-Test: Not Supported 00:19:49.464 Directives: Not Supported 00:19:49.464 NVMe-MI: Not Supported 00:19:49.464 Virtualization Management: Not Supported 00:19:49.464 Doorbell Buffer Config: Not Supported 00:19:49.464 Get LBA Status Capability: Not Supported 00:19:49.464 Command & Feature Lockdown Capability: Not Supported 00:19:49.464 Abort Command Limit: 4 00:19:49.464 Async Event Request Limit: 4 00:19:49.464 Number of Firmware Slots: N/A 00:19:49.464 Firmware Slot 1 Read-Only: N/A 00:19:49.464 Firmware Activation Without Reset: N/A 00:19:49.464 Multiple Update Detection Support: N/A 00:19:49.464 Firmware Update Granularity: No Information Provided 00:19:49.464 Per-Namespace SMART Log: No 00:19:49.464 Asymmetric Namespace Access Log Page: Not Supported 00:19:49.464 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:19:49.464 Command Effects Log Page: Supported 00:19:49.464 Get Log Page Extended Data: Supported 00:19:49.464 Telemetry Log Pages: Not Supported 00:19:49.464 Persistent Event Log Pages: Not Supported 00:19:49.464 Supported Log Pages Log Page: May Support 00:19:49.464 Commands 
Supported & Effects Log Page: Not Supported 00:19:49.464 Feature Identifiers & Effects Log Page:May Support 00:19:49.464 NVMe-MI Commands & Effects Log Page: May Support 00:19:49.464 Data Area 4 for Telemetry Log: Not Supported 00:19:49.464 Error Log Page Entries Supported: 128 00:19:49.464 Keep Alive: Supported 00:19:49.464 Keep Alive Granularity: 10000 ms 00:19:49.464 00:19:49.464 NVM Command Set Attributes 00:19:49.464 ========================== 00:19:49.464 Submission Queue Entry Size 00:19:49.464 Max: 64 00:19:49.464 Min: 64 00:19:49.464 Completion Queue Entry Size 00:19:49.464 Max: 16 00:19:49.464 Min: 16 00:19:49.464 Number of Namespaces: 32 00:19:49.464 Compare Command: Supported 00:19:49.464 Write Uncorrectable Command: Not Supported 00:19:49.464 Dataset Management Command: Supported 00:19:49.464 Write Zeroes Command: Supported 00:19:49.464 Set Features Save Field: Not Supported 00:19:49.464 Reservations: Supported 00:19:49.464 Timestamp: Not Supported 00:19:49.464 Copy: Supported 00:19:49.464 Volatile Write Cache: Present 00:19:49.464 Atomic Write Unit (Normal): 1 00:19:49.464 Atomic Write Unit (PFail): 1 00:19:49.464 Atomic Compare & Write Unit: 1 00:19:49.464 Fused Compare & Write: Supported 00:19:49.464 Scatter-Gather List 00:19:49.464 SGL Command Set: Supported 00:19:49.464 SGL Keyed: Supported 00:19:49.464 SGL Bit Bucket Descriptor: Not Supported 00:19:49.465 SGL Metadata Pointer: Not Supported 00:19:49.465 Oversized SGL: Not Supported 00:19:49.465 SGL Metadata Address: Not Supported 00:19:49.465 SGL Offset: Supported 00:19:49.465 Transport SGL Data Block: Not Supported 00:19:49.465 Replay Protected Memory Block: Not Supported 00:19:49.465 00:19:49.465 Firmware Slot Information 00:19:49.465 ========================= 00:19:49.465 Active slot: 1 00:19:49.465 Slot 1 Firmware Revision: 24.05 00:19:49.465 00:19:49.465 00:19:49.465 Commands Supported and Effects 00:19:49.465 ============================== 00:19:49.465 Admin Commands 00:19:49.465 -------------- 00:19:49.465 Get Log Page (02h): Supported 00:19:49.465 Identify (06h): Supported 00:19:49.465 Abort (08h): Supported 00:19:49.465 Set Features (09h): Supported 00:19:49.465 Get Features (0Ah): Supported 00:19:49.465 Asynchronous Event Request (0Ch): Supported 00:19:49.465 Keep Alive (18h): Supported 00:19:49.465 I/O Commands 00:19:49.465 ------------ 00:19:49.465 Flush (00h): Supported LBA-Change 00:19:49.465 Write (01h): Supported LBA-Change 00:19:49.465 Read (02h): Supported 00:19:49.465 Compare (05h): Supported 00:19:49.465 Write Zeroes (08h): Supported LBA-Change 00:19:49.465 Dataset Management (09h): Supported LBA-Change 00:19:49.465 Copy (19h): Supported LBA-Change 00:19:49.465 Unknown (79h): Supported LBA-Change 00:19:49.465 Unknown (7Ah): Supported 00:19:49.465 00:19:49.465 Error Log 00:19:49.465 ========= 00:19:49.465 00:19:49.465 Arbitration 00:19:49.465 =========== 00:19:49.465 Arbitration Burst: 1 00:19:49.465 00:19:49.465 Power Management 00:19:49.465 ================ 00:19:49.465 Number of Power States: 1 00:19:49.465 Current Power State: Power State #0 00:19:49.465 Power State #0: 00:19:49.465 Max Power: 0.00 W 00:19:49.465 Non-Operational State: Operational 00:19:49.465 Entry Latency: Not Reported 00:19:49.465 Exit Latency: Not Reported 00:19:49.465 Relative Read Throughput: 0 00:19:49.465 Relative Read Latency: 0 00:19:49.465 Relative Write Throughput: 0 00:19:49.465 Relative Write Latency: 0 00:19:49.465 Idle Power: Not Reported 00:19:49.465 Active Power: Not Reported 00:19:49.465 Non-Operational 
Permissive Mode: Not Supported 00:19:49.465 00:19:49.465 Health Information 00:19:49.465 ================== 00:19:49.465 Critical Warnings: 00:19:49.465 Available Spare Space: OK 00:19:49.465 Temperature: OK 00:19:49.465 Device Reliability: OK 00:19:49.465 Read Only: No 00:19:49.465 Volatile Memory Backup: OK 00:19:49.465 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:49.465 Temperature Threshold: [2024-04-27 00:53:42.081352] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.465 [2024-04-27 00:53:42.081357] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1fd1d10) 00:19:49.465 [2024-04-27 00:53:42.081364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.465 [2024-04-27 00:53:42.081377] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x203a400, cid 7, qid 0 00:19:49.465 [2024-04-27 00:53:42.081564] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.465 [2024-04-27 00:53:42.081573] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.465 [2024-04-27 00:53:42.081576] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.465 [2024-04-27 00:53:42.081580] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x203a400) on tqpair=0x1fd1d10 00:19:49.465 [2024-04-27 00:53:42.081610] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:19:49.465 [2024-04-27 00:53:42.081623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.465 [2024-04-27 00:53:42.081629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.465 [2024-04-27 00:53:42.081634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.465 [2024-04-27 00:53:42.081639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:49.465 [2024-04-27 00:53:42.081647] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.465 [2024-04-27 00:53:42.081651] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.465 [2024-04-27 00:53:42.081654] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd1d10) 00:19:49.465 [2024-04-27 00:53:42.081660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.465 [2024-04-27 00:53:42.081673] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039e80, cid 3, qid 0 00:19:49.465 [2024-04-27 00:53:42.081842] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.465 [2024-04-27 00:53:42.081851] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.465 [2024-04-27 00:53:42.081854] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.465 [2024-04-27 00:53:42.081858] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039e80) on tqpair=0x1fd1d10 00:19:49.465 [2024-04-27 00:53:42.081866] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.465 [2024-04-27 00:53:42.081869] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.465 [2024-04-27 00:53:42.081872] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd1d10) 00:19:49.465 [2024-04-27 00:53:42.081879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.465 [2024-04-27 00:53:42.081894] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039e80, cid 3, qid 0 00:19:49.465 [2024-04-27 00:53:42.082044] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.465 [2024-04-27 00:53:42.082053] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.465 [2024-04-27 00:53:42.082056] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.465 [2024-04-27 00:53:42.082059] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039e80) on tqpair=0x1fd1d10 00:19:49.465 [2024-04-27 00:53:42.082064] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:19:49.465 [2024-04-27 00:53:42.082068] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:19:49.465 [2024-04-27 00:53:42.082088] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.465 [2024-04-27 00:53:42.082092] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.465 [2024-04-27 00:53:42.082098] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd1d10) 00:19:49.465 [2024-04-27 00:53:42.082104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.465 [2024-04-27 00:53:42.082117] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039e80, cid 3, qid 0 00:19:49.465 [2024-04-27 00:53:42.082270] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.465 [2024-04-27 00:53:42.082280] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.465 [2024-04-27 00:53:42.082283] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.465 [2024-04-27 00:53:42.082286] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039e80) on tqpair=0x1fd1d10 00:19:49.465 [2024-04-27 00:53:42.082298] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.465 [2024-04-27 00:53:42.082302] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.465 [2024-04-27 00:53:42.082305] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd1d10) 00:19:49.465 [2024-04-27 00:53:42.082311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.465 [2024-04-27 00:53:42.082323] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039e80, cid 3, qid 0 00:19:49.465 [2024-04-27 00:53:42.082501] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.465 [2024-04-27 00:53:42.082510] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.466 [2024-04-27 00:53:42.082513] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.466 [2024-04-27 00:53:42.082517] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039e80) on tqpair=0x1fd1d10 00:19:49.466 [2024-04-27 00:53:42.082527] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.466 [2024-04-27 00:53:42.082531] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.466 [2024-04-27 00:53:42.082534] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd1d10) 00:19:49.466 [2024-04-27 00:53:42.082541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.466 [2024-04-27 00:53:42.082552] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039e80, cid 3, qid 0 00:19:49.466 [2024-04-27 00:53:42.082700] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.466 [2024-04-27 00:53:42.082709] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.466 [2024-04-27 00:53:42.082713] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.466 [2024-04-27 00:53:42.082716] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039e80) on tqpair=0x1fd1d10 00:19:49.466 [2024-04-27 00:53:42.082728] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.466 [2024-04-27 00:53:42.082731] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.466 [2024-04-27 00:53:42.082735] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd1d10) 00:19:49.466 [2024-04-27 00:53:42.082741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.466 [2024-04-27 00:53:42.082753] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039e80, cid 3, qid 0 00:19:49.466 [2024-04-27 00:53:42.082888] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.466 [2024-04-27 00:53:42.082897] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.466 [2024-04-27 00:53:42.082900] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.466 [2024-04-27 00:53:42.082904] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039e80) on tqpair=0x1fd1d10 00:19:49.466 [2024-04-27 00:53:42.082916] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.466 [2024-04-27 00:53:42.082920] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.466 [2024-04-27 00:53:42.082923] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd1d10) 00:19:49.466 [2024-04-27 00:53:42.082932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.466 [2024-04-27 00:53:42.082944] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039e80, cid 3, qid 0 00:19:49.466 [2024-04-27 00:53:42.087079] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.466 [2024-04-27 00:53:42.087091] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.466 [2024-04-27 00:53:42.087095] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.466 [2024-04-27 00:53:42.087098] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039e80) on tqpair=0x1fd1d10 00:19:49.466 [2024-04-27 00:53:42.087110] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:19:49.466 [2024-04-27 00:53:42.087114] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:19:49.466 [2024-04-27 
00:53:42.087117] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd1d10) 00:19:49.466 [2024-04-27 00:53:42.087124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:49.466 [2024-04-27 00:53:42.087137] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2039e80, cid 3, qid 0 00:19:49.466 [2024-04-27 00:53:42.087392] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:19:49.466 [2024-04-27 00:53:42.087402] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:19:49.466 [2024-04-27 00:53:42.087405] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:19:49.466 [2024-04-27 00:53:42.087408] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2039e80) on tqpair=0x1fd1d10 00:19:49.466 [2024-04-27 00:53:42.087418] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:19:49.466 0 Kelvin (-273 Celsius) 00:19:49.466 Available Spare: 0% 00:19:49.466 Available Spare Threshold: 0% 00:19:49.466 Life Percentage Used: 0% 00:19:49.466 Data Units Read: 0 00:19:49.466 Data Units Written: 0 00:19:49.466 Host Read Commands: 0 00:19:49.466 Host Write Commands: 0 00:19:49.466 Controller Busy Time: 0 minutes 00:19:49.466 Power Cycles: 0 00:19:49.466 Power On Hours: 0 hours 00:19:49.466 Unsafe Shutdowns: 0 00:19:49.466 Unrecoverable Media Errors: 0 00:19:49.466 Lifetime Error Log Entries: 0 00:19:49.466 Warning Temperature Time: 0 minutes 00:19:49.466 Critical Temperature Time: 0 minutes 00:19:49.466 00:19:49.466 Number of Queues 00:19:49.466 ================ 00:19:49.466 Number of I/O Submission Queues: 127 00:19:49.466 Number of I/O Completion Queues: 127 00:19:49.466 00:19:49.466 Active Namespaces 00:19:49.466 ================= 00:19:49.466 Namespace ID:1 00:19:49.466 Error Recovery Timeout: Unlimited 00:19:49.466 Command Set Identifier: NVM (00h) 00:19:49.466 Deallocate: Supported 00:19:49.466 Deallocated/Unwritten Error: Not Supported 00:19:49.466 Deallocated Read Value: Unknown 00:19:49.466 Deallocate in Write Zeroes: Not Supported 00:19:49.466 Deallocated Guard Field: 0xFFFF 00:19:49.466 Flush: Supported 00:19:49.466 Reservation: Supported 00:19:49.466 Namespace Sharing Capabilities: Multiple Controllers 00:19:49.466 Size (in LBAs): 131072 (0GiB) 00:19:49.466 Capacity (in LBAs): 131072 (0GiB) 00:19:49.466 Utilization (in LBAs): 131072 (0GiB) 00:19:49.466 NGUID: ABCDEF0123456789ABCDEF0123456789 00:19:49.466 EUI64: ABCDEF0123456789 00:19:49.466 UUID: 0f24e31d-d9fe-4526-92a2-6994043bcd2e 00:19:49.466 Thin Provisioning: Not Supported 00:19:49.466 Per-NS Atomic Units: Yes 00:19:49.466 Atomic Boundary Size (Normal): 0 00:19:49.466 Atomic Boundary Size (PFail): 0 00:19:49.466 Atomic Boundary Offset: 0 00:19:49.466 Maximum Single Source Range Length: 65535 00:19:49.466 Maximum Copy Length: 65535 00:19:49.466 Maximum Source Range Count: 1 00:19:49.466 NGUID/EUI64 Never Reused: No 00:19:49.466 Namespace Write Protected: No 00:19:49.466 Number of LBA Formats: 1 00:19:49.466 Current LBA Format: LBA Format #00 00:19:49.466 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:49.466 00:19:49.466 00:53:42 -- host/identify.sh@51 -- # sync 00:19:49.466 00:53:42 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:49.466 00:53:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.466 00:53:42 -- 
common/autotest_common.sh@10 -- # set +x 00:19:49.466 00:53:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.466 00:53:42 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:19:49.466 00:53:42 -- host/identify.sh@56 -- # nvmftestfini 00:19:49.466 00:53:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:49.466 00:53:42 -- nvmf/common.sh@117 -- # sync 00:19:49.466 00:53:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:49.466 00:53:42 -- nvmf/common.sh@120 -- # set +e 00:19:49.466 00:53:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:49.466 00:53:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:49.466 rmmod nvme_tcp 00:19:49.466 rmmod nvme_fabrics 00:19:49.727 rmmod nvme_keyring 00:19:49.727 00:53:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:49.727 00:53:42 -- nvmf/common.sh@124 -- # set -e 00:19:49.727 00:53:42 -- nvmf/common.sh@125 -- # return 0 00:19:49.727 00:53:42 -- nvmf/common.sh@478 -- # '[' -n 1745763 ']' 00:19:49.727 00:53:42 -- nvmf/common.sh@479 -- # killprocess 1745763 00:19:49.727 00:53:42 -- common/autotest_common.sh@936 -- # '[' -z 1745763 ']' 00:19:49.727 00:53:42 -- common/autotest_common.sh@940 -- # kill -0 1745763 00:19:49.727 00:53:42 -- common/autotest_common.sh@941 -- # uname 00:19:49.727 00:53:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:49.727 00:53:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1745763 00:19:49.727 00:53:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:49.727 00:53:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:49.727 00:53:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1745763' 00:19:49.727 killing process with pid 1745763 00:19:49.727 00:53:42 -- common/autotest_common.sh@955 -- # kill 1745763 00:19:49.727 [2024-04-27 00:53:42.231312] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:49.727 00:53:42 -- common/autotest_common.sh@960 -- # wait 1745763 00:19:49.988 00:53:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:49.988 00:53:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:49.988 00:53:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:49.988 00:53:42 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:49.988 00:53:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:49.988 00:53:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.988 00:53:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.988 00:53:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.896 00:53:44 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:51.896 00:19:51.896 real 0m9.437s 00:19:51.896 user 0m7.663s 00:19:51.896 sys 0m4.553s 00:19:51.896 00:53:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:51.896 00:53:44 -- common/autotest_common.sh@10 -- # set +x 00:19:51.896 ************************************ 00:19:51.896 END TEST nvmf_identify 00:19:51.896 ************************************ 00:19:51.896 00:53:44 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:51.897 00:53:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:51.897 00:53:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:51.897 00:53:44 -- common/autotest_common.sh@10 
-- # set +x 00:19:52.156 ************************************ 00:19:52.156 START TEST nvmf_perf 00:19:52.156 ************************************ 00:19:52.156 00:53:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:19:52.156 * Looking for test storage... 00:19:52.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:52.157 00:53:44 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.157 00:53:44 -- nvmf/common.sh@7 -- # uname -s 00:19:52.157 00:53:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.157 00:53:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.157 00:53:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.157 00:53:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.157 00:53:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.157 00:53:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.157 00:53:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.157 00:53:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.157 00:53:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.157 00:53:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.157 00:53:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:52.157 00:53:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:52.157 00:53:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.157 00:53:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.157 00:53:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.157 00:53:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:52.157 00:53:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:52.157 00:53:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.157 00:53:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.157 00:53:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.157 00:53:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.157 00:53:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.157 00:53:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.157 00:53:44 -- paths/export.sh@5 -- # export PATH 00:19:52.157 00:53:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.157 00:53:44 -- nvmf/common.sh@47 -- # : 0 00:19:52.157 00:53:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:52.157 00:53:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:52.157 00:53:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:52.157 00:53:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.157 00:53:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.157 00:53:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:52.157 00:53:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:52.157 00:53:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:52.157 00:53:44 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:52.157 00:53:44 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:52.157 00:53:44 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:52.157 00:53:44 -- host/perf.sh@17 -- # nvmftestinit 00:19:52.157 00:53:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:52.157 00:53:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.157 00:53:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:52.157 00:53:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:52.157 00:53:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:52.157 00:53:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.157 00:53:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.157 00:53:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.157 00:53:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:52.157 00:53:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:52.157 00:53:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:52.157 00:53:44 -- common/autotest_common.sh@10 -- # set +x 00:19:57.434 00:53:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:57.434 00:53:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:57.434 00:53:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:57.434 00:53:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:57.434 00:53:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:57.434 00:53:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:57.434 00:53:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:57.434 00:53:49 -- nvmf/common.sh@295 -- # net_devs=() 
00:19:57.434 00:53:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:57.434 00:53:49 -- nvmf/common.sh@296 -- # e810=() 00:19:57.434 00:53:49 -- nvmf/common.sh@296 -- # local -ga e810 00:19:57.434 00:53:49 -- nvmf/common.sh@297 -- # x722=() 00:19:57.434 00:53:49 -- nvmf/common.sh@297 -- # local -ga x722 00:19:57.434 00:53:49 -- nvmf/common.sh@298 -- # mlx=() 00:19:57.434 00:53:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:57.434 00:53:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.434 00:53:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.434 00:53:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.434 00:53:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.434 00:53:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.434 00:53:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.434 00:53:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.434 00:53:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.434 00:53:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.434 00:53:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.434 00:53:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.434 00:53:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:57.434 00:53:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:57.434 00:53:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:57.434 00:53:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:57.434 00:53:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:57.434 00:53:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:57.434 00:53:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.434 00:53:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:57.434 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:57.434 00:53:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:57.434 00:53:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:57.434 00:53:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.434 00:53:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.434 00:53:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:57.434 00:53:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:57.434 00:53:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:57.434 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:57.434 00:53:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:57.434 00:53:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:57.434 00:53:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:57.434 00:53:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:57.434 00:53:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:57.434 00:53:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:57.434 00:53:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:57.435 00:53:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:57.435 00:53:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.435 00:53:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.435 00:53:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:57.435 00:53:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:19:57.435 00:53:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:57.435 Found net devices under 0000:86:00.0: cvl_0_0 00:19:57.435 00:53:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.435 00:53:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:57.435 00:53:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.435 00:53:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:57.435 00:53:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.435 00:53:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:57.435 Found net devices under 0000:86:00.1: cvl_0_1 00:19:57.435 00:53:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.435 00:53:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:57.435 00:53:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:57.435 00:53:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:57.435 00:53:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:57.435 00:53:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:57.435 00:53:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:57.435 00:53:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:57.435 00:53:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:57.435 00:53:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:57.435 00:53:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:57.435 00:53:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:57.435 00:53:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:57.435 00:53:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:57.435 00:53:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:57.435 00:53:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:57.435 00:53:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:57.435 00:53:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:57.435 00:53:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:57.435 00:53:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:57.435 00:53:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:57.435 00:53:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:57.435 00:53:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:57.435 00:53:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:57.435 00:53:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:57.435 00:53:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:57.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:57.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:19:57.435 00:19:57.435 --- 10.0.0.2 ping statistics --- 00:19:57.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.435 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:19:57.435 00:53:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:57.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:57.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:19:57.435 00:19:57.435 --- 10.0.0.1 ping statistics --- 00:19:57.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:57.435 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:19:57.435 00:53:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:57.435 00:53:49 -- nvmf/common.sh@411 -- # return 0 00:19:57.435 00:53:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:57.435 00:53:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:57.435 00:53:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:57.435 00:53:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:57.435 00:53:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:57.435 00:53:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:57.435 00:53:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:57.435 00:53:49 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:19:57.435 00:53:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:57.435 00:53:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:57.435 00:53:49 -- common/autotest_common.sh@10 -- # set +x 00:19:57.435 00:53:49 -- nvmf/common.sh@470 -- # nvmfpid=1749468 00:19:57.435 00:53:49 -- nvmf/common.sh@471 -- # waitforlisten 1749468 00:19:57.435 00:53:49 -- common/autotest_common.sh@817 -- # '[' -z 1749468 ']' 00:19:57.435 00:53:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.435 00:53:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:57.435 00:53:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.435 00:53:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:57.435 00:53:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:57.435 00:53:49 -- common/autotest_common.sh@10 -- # set +x 00:19:57.435 [2024-04-27 00:53:49.497820] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:57.435 [2024-04-27 00:53:49.497863] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.435 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.435 [2024-04-27 00:53:49.554753] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.435 [2024-04-27 00:53:49.632958] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.435 [2024-04-27 00:53:49.632993] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.435 [2024-04-27 00:53:49.633000] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.435 [2024-04-27 00:53:49.633006] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.435 [2024-04-27 00:53:49.633011] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:57.435 [2024-04-27 00:53:49.633061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.435 [2024-04-27 00:53:49.633158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.435 [2024-04-27 00:53:49.633173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.435 [2024-04-27 00:53:49.633174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.694 00:53:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:57.694 00:53:50 -- common/autotest_common.sh@850 -- # return 0 00:19:57.694 00:53:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:57.694 00:53:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:57.694 00:53:50 -- common/autotest_common.sh@10 -- # set +x 00:19:57.694 00:53:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.694 00:53:50 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:19:57.694 00:53:50 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:00.986 00:53:53 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:00.986 00:53:53 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:00.986 00:53:53 -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:20:00.986 00:53:53 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:01.245 00:53:53 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:01.245 00:53:53 -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:20:01.245 00:53:53 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:01.245 00:53:53 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:01.245 00:53:53 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:01.245 [2024-04-27 00:53:53.900585] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.245 00:53:53 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:01.505 00:53:54 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:01.505 00:53:54 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:01.764 00:53:54 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:01.764 00:53:54 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:02.024 00:53:54 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.024 [2024-04-27 00:53:54.611209] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.024 00:53:54 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:02.284 00:53:54 -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:20:02.284 00:53:54 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:20:02.284 00:53:54 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
00:20:02.284 00:53:54 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:20:03.663 Initializing NVMe Controllers 00:20:03.663 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:20:03.663 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:20:03.663 Initialization complete. Launching workers. 00:20:03.663 ======================================================== 00:20:03.663 Latency(us) 00:20:03.663 Device Information : IOPS MiB/s Average min max 00:20:03.663 PCIE (0000:5e:00.0) NSID 1 from core 0: 98344.97 384.16 324.94 9.64 5223.43 00:20:03.663 ======================================================== 00:20:03.663 Total : 98344.97 384.16 324.94 9.64 5223.43 00:20:03.663 00:20:03.663 00:53:56 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:03.663 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.043 Initializing NVMe Controllers 00:20:05.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:05.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:05.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:05.043 Initialization complete. Launching workers. 00:20:05.043 ======================================================== 00:20:05.043 Latency(us) 00:20:05.043 Device Information : IOPS MiB/s Average min max 00:20:05.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 100.00 0.39 10318.99 506.69 45172.75 00:20:05.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 45.00 0.18 23031.69 5989.01 47886.83 00:20:05.043 ======================================================== 00:20:05.043 Total : 145.00 0.57 14264.31 506.69 47886.83 00:20:05.043 00:20:05.043 00:53:57 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:05.043 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.981 Initializing NVMe Controllers 00:20:05.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:05.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:05.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:05.981 Initialization complete. Launching workers. 
00:20:05.981 ======================================================== 00:20:05.981 Latency(us) 00:20:05.982 Device Information : IOPS MiB/s Average min max 00:20:05.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8503.99 33.22 3778.73 686.06 12041.91 00:20:05.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3689.00 14.41 8714.77 4954.46 18121.87 00:20:05.982 ======================================================== 00:20:05.982 Total : 12192.99 47.63 5272.13 686.06 18121.87 00:20:05.982 00:20:05.982 00:53:58 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:05.982 00:53:58 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:05.982 00:53:58 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:05.982 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.521 Initializing NVMe Controllers 00:20:08.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:08.521 Controller IO queue size 128, less than required. 00:20:08.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:08.521 Controller IO queue size 128, less than required. 00:20:08.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:08.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:08.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:08.521 Initialization complete. Launching workers. 00:20:08.521 ======================================================== 00:20:08.521 Latency(us) 00:20:08.521 Device Information : IOPS MiB/s Average min max 00:20:08.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 834.99 208.75 157146.62 97798.86 268398.54 00:20:08.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 554.00 138.50 237312.88 77438.52 341386.55 00:20:08.521 ======================================================== 00:20:08.521 Total : 1388.99 347.25 189120.78 77438.52 341386.55 00:20:08.521 00:20:08.521 00:54:01 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:08.521 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.781 No valid NVMe controllers or AIO or URING devices found 00:20:08.781 Initializing NVMe Controllers 00:20:08.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:08.781 Controller IO queue size 128, less than required. 00:20:08.781 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:08.781 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:08.781 Controller IO queue size 128, less than required. 00:20:08.781 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:08.781 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:20:08.781 WARNING: Some requested NVMe devices were skipped 00:20:08.781 00:54:01 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:08.781 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.338 Initializing NVMe Controllers 00:20:11.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:11.338 Controller IO queue size 128, less than required. 00:20:11.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:11.338 Controller IO queue size 128, less than required. 00:20:11.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:11.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:11.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:11.338 Initialization complete. Launching workers. 00:20:11.338 00:20:11.338 ==================== 00:20:11.338 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:11.338 TCP transport: 00:20:11.338 polls: 58990 00:20:11.338 idle_polls: 22549 00:20:11.338 sock_completions: 36441 00:20:11.338 nvme_completions: 3437 00:20:11.338 submitted_requests: 5168 00:20:11.338 queued_requests: 1 00:20:11.338 00:20:11.338 ==================== 00:20:11.338 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:11.338 TCP transport: 00:20:11.338 polls: 61593 00:20:11.338 idle_polls: 20807 00:20:11.338 sock_completions: 40786 00:20:11.338 nvme_completions: 3447 00:20:11.338 submitted_requests: 5204 00:20:11.338 queued_requests: 1 00:20:11.338 ======================================================== 00:20:11.338 Latency(us) 00:20:11.338 Device Information : IOPS MiB/s Average min max 00:20:11.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 859.00 214.75 154788.38 78928.16 250683.70 00:20:11.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 861.50 215.37 155366.28 70311.03 238582.19 00:20:11.338 ======================================================== 00:20:11.338 Total : 1720.50 430.12 155077.75 70311.03 250683.70 00:20:11.338 00:20:11.338 00:54:03 -- host/perf.sh@66 -- # sync 00:20:11.338 00:54:03 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.598 00:54:04 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:11.598 00:54:04 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:11.598 00:54:04 -- host/perf.sh@114 -- # nvmftestfini 00:20:11.598 00:54:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:11.598 00:54:04 -- nvmf/common.sh@117 -- # sync 00:20:11.598 00:54:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:11.598 00:54:04 -- nvmf/common.sh@120 -- # set +e 00:20:11.598 00:54:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:11.598 00:54:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:11.598 rmmod nvme_tcp 00:20:11.598 rmmod nvme_fabrics 00:20:11.598 rmmod nvme_keyring 00:20:11.598 00:54:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.598 00:54:04 -- nvmf/common.sh@124 -- # set -e 00:20:11.598 00:54:04 -- nvmf/common.sh@125 -- # return 0 00:20:11.598 00:54:04 -- 
nvmf/common.sh@478 -- # '[' -n 1749468 ']' 00:20:11.598 00:54:04 -- nvmf/common.sh@479 -- # killprocess 1749468 00:20:11.598 00:54:04 -- common/autotest_common.sh@936 -- # '[' -z 1749468 ']' 00:20:11.598 00:54:04 -- common/autotest_common.sh@940 -- # kill -0 1749468 00:20:11.598 00:54:04 -- common/autotest_common.sh@941 -- # uname 00:20:11.598 00:54:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:11.598 00:54:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1749468 00:20:11.598 00:54:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:11.598 00:54:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:11.598 00:54:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1749468' 00:20:11.598 killing process with pid 1749468 00:20:11.598 00:54:04 -- common/autotest_common.sh@955 -- # kill 1749468 00:20:11.598 00:54:04 -- common/autotest_common.sh@960 -- # wait 1749468 00:20:13.506 00:54:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:13.506 00:54:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:13.506 00:54:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:13.506 00:54:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.506 00:54:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:13.506 00:54:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.506 00:54:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.506 00:54:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.414 00:54:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:15.414 00:20:15.414 real 0m23.114s 00:20:15.414 user 1m4.425s 00:20:15.414 sys 0m6.060s 00:20:15.414 00:54:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:15.414 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:20:15.414 ************************************ 00:20:15.414 END TEST nvmf_perf 00:20:15.414 ************************************ 00:20:15.414 00:54:07 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:15.414 00:54:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:15.414 00:54:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:15.414 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:20:15.414 ************************************ 00:20:15.414 START TEST nvmf_fio_host 00:20:15.414 ************************************ 00:20:15.414 00:54:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:15.414 * Looking for test storage... 
00:20:15.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:15.414 00:54:08 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.674 00:54:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.674 00:54:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.674 00:54:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.674 00:54:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.674 00:54:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.674 00:54:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.674 00:54:08 -- paths/export.sh@5 -- # export PATH 00:20:15.674 00:54:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.674 00:54:08 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.674 00:54:08 -- nvmf/common.sh@7 -- # uname -s 00:20:15.674 00:54:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.674 00:54:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.674 00:54:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.674 00:54:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.674 00:54:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.674 00:54:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.674 00:54:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.674 00:54:08 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.674 00:54:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.674 00:54:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.674 00:54:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:15.675 00:54:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:15.675 00:54:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.675 00:54:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.675 00:54:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.675 00:54:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.675 00:54:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.675 00:54:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.675 00:54:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.675 00:54:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.675 00:54:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.675 00:54:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.675 00:54:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.675 00:54:08 -- paths/export.sh@5 -- # export PATH 00:20:15.675 00:54:08 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.675 00:54:08 -- nvmf/common.sh@47 -- # : 0 00:20:15.675 00:54:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.675 00:54:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.675 00:54:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.675 00:54:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.675 00:54:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.675 00:54:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.675 00:54:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.675 00:54:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.675 00:54:08 -- host/fio.sh@12 -- # nvmftestinit 00:20:15.675 00:54:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:15.675 00:54:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.675 00:54:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:15.675 00:54:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:15.675 00:54:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:15.675 00:54:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.675 00:54:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.675 00:54:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.675 00:54:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:15.675 00:54:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:15.675 00:54:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:15.675 00:54:08 -- common/autotest_common.sh@10 -- # set +x 00:20:20.955 00:54:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:20.955 00:54:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:20.955 00:54:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:20.955 00:54:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:20.955 00:54:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:20.955 00:54:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:20.955 00:54:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:20.955 00:54:12 -- nvmf/common.sh@295 -- # net_devs=() 00:20:20.955 00:54:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:20.955 00:54:12 -- nvmf/common.sh@296 -- # e810=() 00:20:20.955 00:54:12 -- nvmf/common.sh@296 -- # local -ga e810 00:20:20.955 00:54:12 -- nvmf/common.sh@297 -- # x722=() 00:20:20.955 00:54:12 -- nvmf/common.sh@297 -- # local -ga x722 00:20:20.955 00:54:12 -- nvmf/common.sh@298 -- # mlx=() 00:20:20.955 00:54:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:20.955 00:54:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.955 00:54:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.955 00:54:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.955 00:54:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.955 00:54:12 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.955 00:54:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.955 00:54:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.955 00:54:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.955 00:54:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.955 00:54:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.955 00:54:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.955 00:54:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:20.955 00:54:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:20.955 00:54:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:20.955 00:54:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:20.955 00:54:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:20.955 00:54:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:20.955 00:54:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.955 00:54:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:20.955 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:20.955 00:54:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.955 00:54:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.955 00:54:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.956 00:54:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.956 00:54:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.956 00:54:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.956 00:54:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:20.956 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:20.956 00:54:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.956 00:54:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.956 00:54:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.956 00:54:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.956 00:54:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.956 00:54:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:20.956 00:54:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:20.956 00:54:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:20.956 00:54:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.956 00:54:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.956 00:54:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:20.956 00:54:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.956 00:54:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:20.956 Found net devices under 0000:86:00.0: cvl_0_0 00:20:20.956 00:54:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.956 00:54:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.956 00:54:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.956 00:54:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:20.956 00:54:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.956 00:54:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:20.956 Found net devices under 0000:86:00.1: cvl_0_1 00:20:20.956 00:54:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.956 00:54:12 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:20.956 00:54:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:20.956 00:54:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:20.956 00:54:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:20.956 00:54:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:20.956 00:54:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.956 00:54:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.956 00:54:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.956 00:54:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:20.956 00:54:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.956 00:54:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.956 00:54:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:20.956 00:54:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:20.956 00:54:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.956 00:54:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:20.956 00:54:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:20.956 00:54:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.956 00:54:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.956 00:54:13 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:20.956 00:54:13 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:20.956 00:54:13 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:20.956 00:54:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:20.956 00:54:13 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:20.956 00:54:13 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:20.956 00:54:13 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:20.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:20:20.956 00:20:20.956 --- 10.0.0.2 ping statistics --- 00:20:20.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.956 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:20:20.956 00:54:13 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:20.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:20.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:20:20.956 00:20:20.956 --- 10.0.0.1 ping statistics --- 00:20:20.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.956 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:20:20.956 00:54:13 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.956 00:54:13 -- nvmf/common.sh@411 -- # return 0 00:20:20.956 00:54:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:20.956 00:54:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.956 00:54:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:20.956 00:54:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:20.956 00:54:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.956 00:54:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:20.956 00:54:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:20.956 00:54:13 -- host/fio.sh@14 -- # [[ y != y ]] 00:20:20.956 00:54:13 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:20:20.956 00:54:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:20.956 00:54:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.956 00:54:13 -- host/fio.sh@22 -- # nvmfpid=1755967 00:20:20.956 00:54:13 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:20.956 00:54:13 -- host/fio.sh@26 -- # waitforlisten 1755967 00:20:20.956 00:54:13 -- common/autotest_common.sh@817 -- # '[' -z 1755967 ']' 00:20:20.956 00:54:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.956 00:54:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:20.956 00:54:13 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:20.956 00:54:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.956 00:54:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:20.956 00:54:13 -- common/autotest_common.sh@10 -- # set +x 00:20:20.956 [2024-04-27 00:54:13.261349] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:20:20.956 [2024-04-27 00:54:13.261394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.956 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.956 [2024-04-27 00:54:13.318102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.956 [2024-04-27 00:54:13.396911] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.956 [2024-04-27 00:54:13.396945] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.956 [2024-04-27 00:54:13.396953] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.956 [2024-04-27 00:54:13.396959] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.956 [2024-04-27 00:54:13.396964] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
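The nvmf_tcp_init trace above amounts to the following setup, which moves the target-side e810 port (cvl_0_0) into its own network namespace so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2) exchange real TCP traffic on a single host. A condensed sketch, using the interface names detected in this log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port out of the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # accept NVMe/TCP (port 4420) traffic, as in the trace
  ping -c 1 10.0.0.2                                                 # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator sanity check
  modprobe nvme-tcp                                                  # load the kernel NVMe/TCP host driver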
00:20:20.956 [2024-04-27 00:54:13.397021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.956 [2024-04-27 00:54:13.397105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.956 [2024-04-27 00:54:13.397154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.956 [2024-04-27 00:54:13.397155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.526 00:54:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:21.526 00:54:14 -- common/autotest_common.sh@850 -- # return 0 00:20:21.526 00:54:14 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:21.526 00:54:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.526 00:54:14 -- common/autotest_common.sh@10 -- # set +x 00:20:21.526 [2024-04-27 00:54:14.076903] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.526 00:54:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.526 00:54:14 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:20:21.526 00:54:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:21.526 00:54:14 -- common/autotest_common.sh@10 -- # set +x 00:20:21.526 00:54:14 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:21.526 00:54:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.526 00:54:14 -- common/autotest_common.sh@10 -- # set +x 00:20:21.526 Malloc1 00:20:21.526 00:54:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.526 00:54:14 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:21.526 00:54:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.526 00:54:14 -- common/autotest_common.sh@10 -- # set +x 00:20:21.526 00:54:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.526 00:54:14 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:21.526 00:54:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.526 00:54:14 -- common/autotest_common.sh@10 -- # set +x 00:20:21.526 00:54:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.526 00:54:14 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:21.526 00:54:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.526 00:54:14 -- common/autotest_common.sh@10 -- # set +x 00:20:21.526 [2024-04-27 00:54:14.164635] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.526 00:54:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.526 00:54:14 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:21.526 00:54:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.526 00:54:14 -- common/autotest_common.sh@10 -- # set +x 00:20:21.526 00:54:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.526 00:54:14 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:21.526 00:54:14 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:21.526 00:54:14 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:21.526 00:54:14 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:20:21.526 00:54:14 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:21.526 00:54:14 -- common/autotest_common.sh@1325 -- # local sanitizers 00:20:21.526 00:54:14 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:21.526 00:54:14 -- common/autotest_common.sh@1327 -- # shift 00:20:21.526 00:54:14 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:20:21.526 00:54:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:21.526 00:54:14 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:21.526 00:54:14 -- common/autotest_common.sh@1331 -- # grep libasan 00:20:21.526 00:54:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:21.526 00:54:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:21.526 00:54:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:21.526 00:54:14 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:21.526 00:54:14 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:21.526 00:54:14 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:20:21.526 00:54:14 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:21.785 00:54:14 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:21.785 00:54:14 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:21.785 00:54:14 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:21.785 00:54:14 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:22.044 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:22.044 fio-3.35 00:20:22.044 Starting 1 thread 00:20:22.044 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.622 00:20:24.622 test: (groupid=0, jobs=1): err= 0: pid=1756353: Sat Apr 27 00:54:16 2024 00:20:24.622 read: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(82.7MiB/2005msec) 00:20:24.622 slat (nsec): min=1541, max=241320, avg=1744.36, stdev=2332.00 00:20:24.622 clat (usec): min=3679, max=16969, avg=7003.26, stdev=1996.23 00:20:24.622 lat (usec): min=3681, max=16971, avg=7005.00, stdev=1996.33 00:20:24.622 clat percentiles (usec): 00:20:24.622 | 1.00th=[ 4359], 5.00th=[ 5080], 10.00th=[ 5473], 20.00th=[ 5866], 00:20:24.622 | 30.00th=[ 6063], 40.00th=[ 6259], 50.00th=[ 6456], 60.00th=[ 6652], 00:20:24.622 | 70.00th=[ 6915], 80.00th=[ 7439], 90.00th=[ 9634], 95.00th=[11863], 00:20:24.622 | 99.00th=[14746], 99.50th=[15401], 99.90th=[16319], 99.95th=[16581], 00:20:24.622 | 99.99th=[16712] 00:20:24.622 bw ( KiB/s): min=39984, max=43192, per=99.94%, avg=42222.00, stdev=1501.29, samples=4 00:20:24.622 iops : min= 9996, max=10798, avg=10555.50, stdev=375.32, samples=4 00:20:24.622 write: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(82.7MiB/2005msec); 0 zone resets 00:20:24.622 slat (nsec): min=1583, max=235667, avg=1847.74, stdev=1797.19 00:20:24.622 
clat (usec): min=2034, max=11380, avg=5032.34, stdev=1069.90 00:20:24.622 lat (usec): min=2036, max=11550, avg=5034.19, stdev=1070.07 00:20:24.622 clat percentiles (usec): 00:20:24.622 | 1.00th=[ 2900], 5.00th=[ 3490], 10.00th=[ 3851], 20.00th=[ 4359], 00:20:24.622 | 30.00th=[ 4621], 40.00th=[ 4817], 50.00th=[ 4948], 60.00th=[ 5145], 00:20:24.622 | 70.00th=[ 5276], 80.00th=[ 5473], 90.00th=[ 5997], 95.00th=[ 7177], 00:20:24.622 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[10552], 99.95th=[11076], 00:20:24.622 | 99.99th=[11338] 00:20:24.622 bw ( KiB/s): min=40576, max=42936, per=99.96%, avg=42228.00, stdev=1121.42, samples=4 00:20:24.622 iops : min=10144, max=10734, avg=10557.00, stdev=280.35, samples=4 00:20:24.622 lat (msec) : 4=6.55%, 10=88.83%, 20=4.62% 00:20:24.622 cpu : usr=68.56%, sys=25.65%, ctx=32, majf=0, minf=4 00:20:24.622 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:24.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:24.622 issued rwts: total=21177,21175,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.622 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:24.622 00:20:24.622 Run status group 0 (all jobs): 00:20:24.622 READ: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=82.7MiB (86.7MB), run=2005-2005msec 00:20:24.622 WRITE: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=82.7MiB (86.7MB), run=2005-2005msec 00:20:24.622 00:54:16 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:24.622 00:54:16 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:24.622 00:54:16 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:20:24.622 00:54:16 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:24.622 00:54:16 -- common/autotest_common.sh@1325 -- # local sanitizers 00:20:24.622 00:54:16 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:24.622 00:54:16 -- common/autotest_common.sh@1327 -- # shift 00:20:24.622 00:54:16 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:20:24.622 00:54:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.622 00:54:16 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:24.622 00:54:16 -- common/autotest_common.sh@1331 -- # grep libasan 00:20:24.622 00:54:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:24.622 00:54:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:24.622 00:54:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:24.622 00:54:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.622 00:54:16 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:24.622 00:54:16 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:20:24.622 00:54:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:24.622 00:54:16 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:20:24.622 00:54:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:24.622 00:54:16 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:24.622 00:54:16 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:24.622 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:24.622 fio-3.35 00:20:24.622 Starting 1 thread 00:20:24.622 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.157 00:20:27.157 test: (groupid=0, jobs=1): err= 0: pid=1756847: Sat Apr 27 00:54:19 2024 00:20:27.157 read: IOPS=9354, BW=146MiB/s (153MB/s)(294MiB/2009msec) 00:20:27.157 slat (nsec): min=2540, max=84470, avg=2844.05, stdev=1308.13 00:20:27.157 clat (usec): min=3034, max=35890, avg=8352.63, stdev=3502.51 00:20:27.157 lat (usec): min=3037, max=35896, avg=8355.48, stdev=3502.95 00:20:27.157 clat percentiles (usec): 00:20:27.157 | 1.00th=[ 4015], 5.00th=[ 4948], 10.00th=[ 5538], 20.00th=[ 6194], 00:20:27.157 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8160], 00:20:27.157 | 70.00th=[ 8717], 80.00th=[ 9503], 90.00th=[10814], 95.00th=[12911], 00:20:27.157 | 99.00th=[26084], 99.50th=[26870], 99.90th=[29230], 99.95th=[29230], 00:20:27.157 | 99.99th=[31851] 00:20:27.157 bw ( KiB/s): min=70848, max=83168, per=49.89%, avg=74680.00, stdev=5767.72, samples=4 00:20:27.157 iops : min= 4428, max= 5198, avg=4667.50, stdev=360.48, samples=4 00:20:27.157 write: IOPS=5489, BW=85.8MiB/s (89.9MB/s)(153MiB/1779msec); 0 zone resets 00:20:27.157 slat (usec): min=29, max=381, avg=32.12, stdev= 7.53 00:20:27.157 clat (usec): min=4213, max=29734, avg=9358.12, stdev=3329.75 00:20:27.157 lat (usec): min=4244, max=29805, avg=9390.24, stdev=3333.03 00:20:27.157 clat percentiles (usec): 00:20:27.157 | 1.00th=[ 5997], 5.00th=[ 6718], 10.00th=[ 7177], 20.00th=[ 7635], 00:20:27.157 | 30.00th=[ 8094], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 9110], 00:20:27.158 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11076], 95.00th=[12256], 00:20:27.158 | 99.00th=[27132], 99.50th=[28967], 99.90th=[29230], 99.95th=[29492], 00:20:27.158 | 99.99th=[29754] 00:20:27.158 bw ( KiB/s): min=73600, max=86624, per=88.49%, avg=77728.00, stdev=6054.85, samples=4 00:20:27.158 iops : min= 4600, max= 5414, avg=4858.00, stdev=378.43, samples=4 00:20:27.158 lat (msec) : 4=0.62%, 10=81.74%, 20=14.92%, 50=2.73% 00:20:27.158 cpu : usr=82.62%, sys=13.79%, ctx=14, majf=0, minf=1 00:20:27.158 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:20:27.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:27.158 issued rwts: total=18794,9766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.158 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:27.158 00:20:27.158 Run status group 0 (all jobs): 00:20:27.158 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=294MiB (308MB), run=2009-2009msec 00:20:27.158 WRITE: bw=85.8MiB/s (89.9MB/s), 85.8MiB/s-85.8MiB/s (89.9MB/s-89.9MB/s), io=153MiB (160MB), run=1779-1779msec 00:20:27.158 00:54:19 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.158 00:54:19 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.158 00:54:19 -- common/autotest_common.sh@10 -- # set +x 00:20:27.158 00:54:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.158 00:54:19 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:20:27.158 00:54:19 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:20:27.158 00:54:19 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:20:27.158 00:54:19 -- host/fio.sh@84 -- # nvmftestfini 00:20:27.158 00:54:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:27.158 00:54:19 -- nvmf/common.sh@117 -- # sync 00:20:27.158 00:54:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:27.158 00:54:19 -- nvmf/common.sh@120 -- # set +e 00:20:27.158 00:54:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:27.158 00:54:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:27.158 rmmod nvme_tcp 00:20:27.158 rmmod nvme_fabrics 00:20:27.158 rmmod nvme_keyring 00:20:27.158 00:54:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:27.158 00:54:19 -- nvmf/common.sh@124 -- # set -e 00:20:27.158 00:54:19 -- nvmf/common.sh@125 -- # return 0 00:20:27.158 00:54:19 -- nvmf/common.sh@478 -- # '[' -n 1755967 ']' 00:20:27.158 00:54:19 -- nvmf/common.sh@479 -- # killprocess 1755967 00:20:27.158 00:54:19 -- common/autotest_common.sh@936 -- # '[' -z 1755967 ']' 00:20:27.158 00:54:19 -- common/autotest_common.sh@940 -- # kill -0 1755967 00:20:27.158 00:54:19 -- common/autotest_common.sh@941 -- # uname 00:20:27.158 00:54:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:27.158 00:54:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1755967 00:20:27.158 00:54:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:27.158 00:54:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:27.158 00:54:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1755967' 00:20:27.158 killing process with pid 1755967 00:20:27.158 00:54:19 -- common/autotest_common.sh@955 -- # kill 1755967 00:20:27.158 00:54:19 -- common/autotest_common.sh@960 -- # wait 1755967 00:20:27.158 00:54:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:27.158 00:54:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:27.158 00:54:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:27.158 00:54:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.158 00:54:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:27.158 00:54:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.158 00:54:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.158 00:54:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.698 00:54:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:29.698 00:20:29.698 real 0m13.843s 00:20:29.698 user 0m40.959s 00:20:29.698 sys 0m5.630s 00:20:29.698 00:54:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:29.698 00:54:21 -- common/autotest_common.sh@10 -- # set +x 00:20:29.698 ************************************ 00:20:29.698 END TEST nvmf_fio_host 00:20:29.698 ************************************ 00:20:29.698 00:54:21 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:29.698 00:54:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:29.698 00:54:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:29.698 00:54:21 -- common/autotest_common.sh@10 -- # 
set +x 00:20:29.698 ************************************ 00:20:29.698 START TEST nvmf_failover 00:20:29.698 ************************************ 00:20:29.699 00:54:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:29.699 * Looking for test storage... 00:20:29.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:29.699 00:54:22 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.699 00:54:22 -- nvmf/common.sh@7 -- # uname -s 00:20:29.699 00:54:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.699 00:54:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.699 00:54:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.699 00:54:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.699 00:54:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.699 00:54:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.699 00:54:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.699 00:54:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.699 00:54:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.699 00:54:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.699 00:54:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:29.699 00:54:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:29.699 00:54:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.699 00:54:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.699 00:54:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.699 00:54:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.699 00:54:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:29.699 00:54:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.699 00:54:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.699 00:54:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.699 00:54:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.699 00:54:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.699 00:54:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.699 00:54:22 -- paths/export.sh@5 -- # export PATH 00:20:29.699 00:54:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.699 00:54:22 -- nvmf/common.sh@47 -- # : 0 00:20:29.699 00:54:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:29.699 00:54:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:29.699 00:54:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.699 00:54:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.699 00:54:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.699 00:54:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:29.699 00:54:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:29.699 00:54:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:29.699 00:54:22 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:29.699 00:54:22 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:29.699 00:54:22 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:29.699 00:54:22 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:29.699 00:54:22 -- host/failover.sh@18 -- # nvmftestinit 00:20:29.699 00:54:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:29.699 00:54:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.699 00:54:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:29.699 00:54:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:29.699 00:54:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:29.699 00:54:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.699 00:54:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.699 00:54:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.699 00:54:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:29.699 00:54:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:29.699 00:54:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:29.699 00:54:22 -- common/autotest_common.sh@10 -- # set +x 00:20:34.983 00:54:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:34.983 00:54:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:34.983 00:54:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:34.983 00:54:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:34.983 00:54:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:34.983 00:54:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:34.983 00:54:27 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:34.983 00:54:27 -- nvmf/common.sh@295 -- # net_devs=() 00:20:34.983 00:54:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:34.983 00:54:27 -- nvmf/common.sh@296 -- # e810=() 00:20:34.983 00:54:27 -- nvmf/common.sh@296 -- # local -ga e810 00:20:34.983 00:54:27 -- nvmf/common.sh@297 -- # x722=() 00:20:34.983 00:54:27 -- nvmf/common.sh@297 -- # local -ga x722 00:20:34.983 00:54:27 -- nvmf/common.sh@298 -- # mlx=() 00:20:34.983 00:54:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:34.983 00:54:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.983 00:54:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.983 00:54:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.983 00:54:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.983 00:54:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.983 00:54:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.983 00:54:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.983 00:54:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.983 00:54:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.983 00:54:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.983 00:54:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.983 00:54:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:34.983 00:54:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:34.983 00:54:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:34.983 00:54:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.983 00:54:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:34.983 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:34.983 00:54:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.983 00:54:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:34.983 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:34.983 00:54:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:34.983 00:54:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.983 00:54:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.983 00:54:27 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:20:34.983 00:54:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.983 00:54:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:34.983 Found net devices under 0000:86:00.0: cvl_0_0 00:20:34.983 00:54:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.983 00:54:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.983 00:54:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.983 00:54:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:34.983 00:54:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.983 00:54:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:34.983 Found net devices under 0000:86:00.1: cvl_0_1 00:20:34.983 00:54:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.983 00:54:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:34.983 00:54:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:34.983 00:54:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:34.983 00:54:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:34.983 00:54:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.983 00:54:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.983 00:54:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.983 00:54:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:34.983 00:54:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.983 00:54:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.983 00:54:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:34.983 00:54:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.983 00:54:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.983 00:54:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:34.983 00:54:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:34.983 00:54:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.983 00:54:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.983 00:54:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.983 00:54:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.983 00:54:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:34.983 00:54:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.983 00:54:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.983 00:54:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.983 00:54:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:34.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:20:34.983 00:20:34.983 --- 10.0.0.2 ping statistics --- 00:20:34.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.984 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:20:34.984 00:54:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:34.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.476 ms 00:20:34.984 00:20:34.984 --- 10.0.0.1 ping statistics --- 00:20:34.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.984 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:20:34.984 00:54:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.984 00:54:27 -- nvmf/common.sh@411 -- # return 0 00:20:34.984 00:54:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:34.984 00:54:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.984 00:54:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:34.984 00:54:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:34.984 00:54:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.984 00:54:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:34.984 00:54:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:34.984 00:54:27 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:34.984 00:54:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:34.984 00:54:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:34.984 00:54:27 -- common/autotest_common.sh@10 -- # set +x 00:20:34.984 00:54:27 -- nvmf/common.sh@470 -- # nvmfpid=1760810 00:20:34.984 00:54:27 -- nvmf/common.sh@471 -- # waitforlisten 1760810 00:20:34.984 00:54:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:34.984 00:54:27 -- common/autotest_common.sh@817 -- # '[' -z 1760810 ']' 00:20:34.984 00:54:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.984 00:54:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:34.984 00:54:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.984 00:54:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:34.984 00:54:27 -- common/autotest_common.sh@10 -- # set +x 00:20:35.244 [2024-04-27 00:54:27.714917] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:20:35.244 [2024-04-27 00:54:27.714962] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.244 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.244 [2024-04-27 00:54:27.771853] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:35.244 [2024-04-27 00:54:27.841737] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.244 [2024-04-27 00:54:27.841779] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.244 [2024-04-27 00:54:27.841785] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.244 [2024-04-27 00:54:27.841792] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.244 [2024-04-27 00:54:27.841797] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
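(Editor's note, not part of the captured log.) The nvmf/common.sh trace above carves a point-to-point NVMe/TCP topology out of the two discovered E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and nvmf_tgt is then launched inside the namespace. A minimal sketch of the same setup, assuming two otherwise unused interfaces named tgt0 and ini0 (hypothetical names; the real run uses the cvl_* devices created by the ice driver) and a relative path to the nvmf_tgt binary:

  # minimal sketch, not the harness itself -- interface names tgt0/ini0 are assumptions
  ip netns add nvmf_tgt_ns                                   # namespace that will own the target port
  ip link set tgt0 netns nvmf_tgt_ns                         # move the target-side port out of the root namespace
  ip addr add 10.0.0.1/24 dev ini0 && ip link set ini0 up    # initiator side stays in the root namespace
  ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev tgt0 # target address lives inside the namespace
  ip netns exec nvmf_tgt_ns ip link set tgt0 up
  ip netns exec nvmf_tgt_ns ip link set lo up
  iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in, as the log does for cvl_0_1
  ping -c 1 10.0.0.2                                         # root namespace can reach the target address
  # the target itself then runs inside the namespace, with the same flags as the log:
  ip netns exec nvmf_tgt_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The trace that continues below is that nvmf_tgt instance starting its reactors and waiting for RPC commands on /var/tmp/spdk.sock.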
00:20:35.244 [2024-04-27 00:54:27.841897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.244 [2024-04-27 00:54:27.841961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.245 [2024-04-27 00:54:27.841962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.184 00:54:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:36.184 00:54:28 -- common/autotest_common.sh@850 -- # return 0 00:20:36.184 00:54:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:36.184 00:54:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:36.184 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.184 00:54:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.184 00:54:28 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:36.184 [2024-04-27 00:54:28.707357] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.184 00:54:28 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:36.444 Malloc0 00:20:36.444 00:54:28 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:36.444 00:54:29 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:36.703 00:54:29 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.963 [2024-04-27 00:54:29.466302] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.963 00:54:29 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:36.963 [2024-04-27 00:54:29.642733] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:37.223 00:54:29 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:37.223 [2024-04-27 00:54:29.819306] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:37.223 00:54:29 -- host/failover.sh@31 -- # bdevperf_pid=1761076 00:20:37.223 00:54:29 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:37.223 00:54:29 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:37.223 00:54:29 -- host/failover.sh@34 -- # waitforlisten 1761076 /var/tmp/bdevperf.sock 00:20:37.223 00:54:29 -- common/autotest_common.sh@817 -- # '[' -z 1761076 ']' 00:20:37.223 00:54:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.223 00:54:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:37.223 00:54:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:37.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.223 00:54:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:37.223 00:54:29 -- common/autotest_common.sh@10 -- # set +x 00:20:38.166 00:54:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:38.166 00:54:30 -- common/autotest_common.sh@850 -- # return 0 00:20:38.166 00:54:30 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:38.426 NVMe0n1 00:20:38.426 00:54:30 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:38.685 00:20:38.944 00:54:31 -- host/failover.sh@39 -- # run_test_pid=1761325 00:20:38.944 00:54:31 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:38.944 00:54:31 -- host/failover.sh@41 -- # sleep 1 00:20:39.883 00:54:32 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.883 [2024-04-27 00:54:32.558733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558799] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558813] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558826] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558837] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558843] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558854] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558860] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558866] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558872] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the 
state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558878] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558889] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558895] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558900] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558906] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558912] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558923] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558929] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558934] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558940] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558946] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558952] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558958] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558963] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558969] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558975] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558986] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.558997] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559003] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559009] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559015] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559021] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559026] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559032] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559039] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559044] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559050] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559056] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559062] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559067] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559085] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559091] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559102] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559108] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559123] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:39.883 [2024-04-27 00:54:32.559129] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dff6d0 is same with the state(5) to be set 00:20:40.143 00:54:32 -- host/failover.sh@45 -- # sleep 3 00:20:43.436 00:54:35 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:43.436 00:20:43.436 00:54:35 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:43.696 [2024-04-27 00:54:36.152229] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152280] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152293] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152305] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152317] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152334] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152340] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152345] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152351] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152363] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152368] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152374] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152386] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152393] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152398] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152415] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152421] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152432] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152438] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152443] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152450] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152456] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152474] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152481] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152487] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 [2024-04-27 00:54:36.152505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00a90 is same with the state(5) to be set 00:20:43.696 00:54:36 -- host/failover.sh@50 -- # sleep 3 00:20:46.993 00:54:39 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:46.993 [2024-04-27 00:54:39.352055] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:20:46.993 00:54:39 -- host/failover.sh@55 -- # sleep 1 00:20:47.934 00:54:40 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:47.934 [2024-04-27 00:54:40.553437] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553486] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553493] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553499] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553506] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553512] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553517] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553523] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553529] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553534] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553540] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553546] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553552] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553562] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553568] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553573] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553579] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553585] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553591] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to 
be set 00:20:47.934 [2024-04-27 00:54:40.553597] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 [2024-04-27 00:54:40.553603] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e01170 is same with the state(5) to be set 00:20:47.934 00:54:40 -- host/failover.sh@59 -- # wait 1761325 00:20:54.516 0 00:20:54.516 00:54:46 -- host/failover.sh@61 -- # killprocess 1761076 00:20:54.516 00:54:46 -- common/autotest_common.sh@936 -- # '[' -z 1761076 ']' 00:20:54.516 00:54:46 -- common/autotest_common.sh@940 -- # kill -0 1761076 00:20:54.516 00:54:46 -- common/autotest_common.sh@941 -- # uname 00:20:54.516 00:54:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:54.516 00:54:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1761076 00:20:54.516 00:54:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:54.516 00:54:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:54.516 00:54:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1761076' 00:20:54.516 killing process with pid 1761076 00:20:54.516 00:54:46 -- common/autotest_common.sh@955 -- # kill 1761076 00:20:54.516 00:54:46 -- common/autotest_common.sh@960 -- # wait 1761076 00:20:54.516 00:54:46 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:54.516 [2024-04-27 00:54:29.885710] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:20:54.516 [2024-04-27 00:54:29.885754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1761076 ] 00:20:54.516 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.516 [2024-04-27 00:54:29.938536] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.516 [2024-04-27 00:54:30.013906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.516 Running I/O for 15 seconds... 
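(Editor's note, not part of the captured log.) Everything from the "cat try.txt" above onward is the bdevperf log captured during the 15-second verify run. The ABORTED - SQ DELETION completions that follow are the in-flight READ/WRITE commands on the port 4420 path being failed back to the initiator at 00:54:32, when host/failover.sh removed that listener; bdevperf is expected to retry them on the remaining 4421/4422 paths. The whole failover is driven through rpc.py, with the second bdev_nvme_attach_controller call on the same -b name supplying the alternate path. A condensed sketch of the sequence exercised above (listener addresses and NQN exactly as in the log; rpc.py assumed to be on PATH rather than invoked by full workspace path):

  # condensed sketch of the host/failover.sh sequence, not a verbatim transcript
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                                  # three listeners on the target address
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # bdevperf has its own RPC socket; give it a primary path and a failover path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # dropping the active listener produces the aborted commands dumped below and forces I/O onto port 4421
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420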
00:20:54.516 [2024-04-27 00:54:32.559517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559713] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.516 [2024-04-27 00:54:32.559778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.516 [2024-04-27 00:54:32.559784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.559988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.559994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560018] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:95632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.517 [2024-04-27 00:54:32.560056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.517 [2024-04-27 00:54:32.560076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:96168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.517 [2024-04-27 00:54:32.560092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:96176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.517 [2024-04-27 00:54:32.560106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.517 [2024-04-27 00:54:32.560123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.517 [2024-04-27 00:54:32.560138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:96200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.517 [2024-04-27 00:54:32.560152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.517 [2024-04-27 00:54:32.560167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:95640 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:54.517 [2024-04-27 00:54:32.560337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:95728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.517 [2024-04-27 00:54:32.560376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.517 [2024-04-27 00:54:32.560383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560488] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.518 [2024-04-27 00:54:32.560775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.518 [2024-04-27 00:54:32.560790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.518 [2024-04-27 00:54:32.560806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.518 [2024-04-27 00:54:32.560820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.518 [2024-04-27 00:54:32.560835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.518 [2024-04-27 00:54:32.560850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:96256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.518 [2024-04-27 00:54:32.560865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:96264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.518 [2024-04-27 00:54:32.560881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:96272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.518 [2024-04-27 00:54:32.560896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:96280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.518 [2024-04-27 00:54:32.560910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.518 [2024-04-27 00:54:32.560925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.518 [2024-04-27 00:54:32.560940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.518 [2024-04-27 00:54:32.560955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.518 [2024-04-27 00:54:32.560963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.518 [2024-04-27 00:54:32.560969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.560978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.519 [2024-04-27 00:54:32.560984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.560992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.519 [2024-04-27 00:54:32.560999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:96336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.519 [2024-04-27 00:54:32.561014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 
00:54:32.561106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.519 [2024-04-27 00:54:32.561158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:96352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.519 [2024-04-27 00:54:32.561174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.519 [2024-04-27 00:54:32.561189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.519 [2024-04-27 00:54:32.561203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.519 [2024-04-27 00:54:32.561218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.519 [2024-04-27 00:54:32.561233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.519 [2024-04-27 00:54:32.561248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.519 [2024-04-27 00:54:32.561263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561413] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.519 [2024-04-27 00:54:32.561495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1096560 is same with the state(5) to be set 00:20:54.519 [2024-04-27 00:54:32.561512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.519 [2024-04-27 00:54:32.561518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.519 [2024-04-27 00:54:32.561526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96144 len:8 PRP1 0x0 PRP2 0x0 00:20:54.519 [2024-04-27 00:54:32.561533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.519 [2024-04-27 00:54:32.561576] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1096560 was disconnected and freed. reset controller. 
00:20:54.519 [2024-04-27 00:54:32.561585] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:20:54.519 [2024-04-27 00:54:32.561606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:54.519 [2024-04-27 00:54:32.561614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:54.519 [2024-04-27 00:54:32.561621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:54.519 [2024-04-27 00:54:32.561628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:54.519 [2024-04-27 00:54:32.561635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:54.519 [2024-04-27 00:54:32.561642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:54.520 [2024-04-27 00:54:32.561649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:54.520 [2024-04-27 00:54:32.561655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:54.520 [2024-04-27 00:54:32.561662] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:54.520 [2024-04-27 00:54:32.564513] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:20:54.520 [2024-04-27 00:54:32.564540] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1077530 (9): Bad file descriptor 
00:20:54.520 [2024-04-27 00:54:32.730366] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:54.520 [2024-04-27 00:54:36.152705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152896] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.520 [2024-04-27 00:54:36.152979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.152989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.152996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153050] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61056 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.520 [2024-04-27 00:54:36.153226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.520 [2024-04-27 00:54:36.153232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 
[2024-04-27 00:54:36.153376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.521 [2024-04-27 00:54:36.153603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.521 [2024-04-27 00:54:36.153757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.521 [2024-04-27 00:54:36.153765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.153964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.153973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.153981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 
00:54:36.153989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.153996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:60784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:60792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:60856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:60872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.522 [2024-04-27 00:54:36.154337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.154352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.522 [2024-04-27 00:54:36.154367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.522 [2024-04-27 00:54:36.154375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61384 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.523 [2024-04-27 00:54:36.154577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:36.154592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 
[2024-04-27 00:54:36.154607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:36.154622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:36.154637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:36.154652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:36.154667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:36.154681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1098560 is same with the state(5) to be set 00:20:54.523 [2024-04-27 00:54:36.154700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.523 [2024-04-27 00:54:36.154706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.523 [2024-04-27 00:54:36.154714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60936 len:8 PRP1 0x0 PRP2 0x0 00:20:54.523 [2024-04-27 00:54:36.154721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154763] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1098560 was disconnected and freed. reset controller. 
00:20:54.523 [2024-04-27 00:54:36.154772] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:20:54.523 [2024-04-27 00:54:36.154793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.523 [2024-04-27 00:54:36.154801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.523 [2024-04-27 00:54:36.154814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.523 [2024-04-27 00:54:36.154829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.523 [2024-04-27 00:54:36.154842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:36.154848] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:54.523 [2024-04-27 00:54:36.154871] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1077530 (9): Bad file descriptor 00:20:54.523 [2024-04-27 00:54:36.157691] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:54.523 [2024-04-27 00:54:36.230988] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
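The block above is one complete failover cycle as bdevperf reports it: the I/O qpair 0x1098560 is disconnected and freed, its queued commands complete with ABORTED - SQ DELETION status, bdev_nvme fails the path over from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller is reset. A minimal sketch for pulling that timeline out of a captured copy of this output (the file name bdevperf.log is an assumption, not something the test produces under that name):

    # Hypothetical post-processing helper, not part of failover.sh.
    log=bdevperf.log   # assumed capture of the bdevperf output shown above
    # Print only the lines that mark the failover sequence itself.
    grep -E 'was disconnected and freed|Start failover from|resetting controller|Resetting controller successful' "$log"
    # Count how many queued completions were aborted during the switchover.
    printf 'aborted completions: %s\n' "$(grep -c 'ABORTED - SQ DELETION' "$log")"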
00:20:54.523 [2024-04-27 00:54:40.553846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:40.553881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:40.553897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:40.553905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:40.553914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:40.553922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:40.553930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:40.553938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:40.553947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:40.553957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:40.553965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:40.553972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:40.553980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:40.553987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:40.553996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:40.554002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:40.554010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:40.554017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:40.554025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:40.554032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.523 [2024-04-27 00:54:40.554040] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.523 [2024-04-27 00:54:40.554047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.524 [2024-04-27 00:54:40.554062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.524 [2024-04-27 00:54:40.554082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.524 [2024-04-27 00:54:40.554096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.524 [2024-04-27 00:54:40.554112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:112 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72176 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 00:54:40.554642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.524 [2024-04-27 00:54:40.554650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.524 [2024-04-27 
00:54:40.554657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.525 [2024-04-27 00:54:40.554671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.525 [2024-04-27 00:54:40.554686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.525 [2024-04-27 00:54:40.554700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.525 [2024-04-27 00:54:40.554715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.554989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.554998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.555004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.555012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.555019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.555027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.555034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.555042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.555049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.555057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.555064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.555075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.555082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.555090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.555096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.555105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.555113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.555122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.555129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.555137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.555144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.525 [2024-04-27 00:54:40.555152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.525 [2024-04-27 00:54:40.555159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 
[2024-04-27 00:54:40.555273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:123 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:54.526 [2024-04-27 00:54:40.555583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71920 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.526 [2024-04-27 00:54:40.555758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.526 [2024-04-27 00:54:40.555764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.527 [2024-04-27 00:54:40.555773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.527 [2024-04-27 00:54:40.555779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.527 [2024-04-27 00:54:40.555789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.527 [2024-04-27 00:54:40.555795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.527 [2024-04-27 00:54:40.555803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:54.527 [2024-04-27 00:54:40.555810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.527 [2024-04-27 00:54:40.555818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1083af0 is same with the state(5) to be set 00:20:54.527 [2024-04-27 00:54:40.555827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:54.527 [2024-04-27 00:54:40.555833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:54.527 [2024-04-27 00:54:40.555839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71968 len:8 PRP1 0x0 PRP2 0x0 00:20:54.527 [2024-04-27 00:54:40.555846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.527 [2024-04-27 00:54:40.555888] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1083af0 was disconnected and freed. reset controller. 
00:20:54.527 [2024-04-27 00:54:40.555898] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:20:54.527 [2024-04-27 00:54:40.555919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.527 [2024-04-27 00:54:40.555927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.527 [2024-04-27 00:54:40.555936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.527 [2024-04-27 00:54:40.555943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.527 [2024-04-27 00:54:40.555950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.527 [2024-04-27 00:54:40.555957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.527 [2024-04-27 00:54:40.555964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:54.527 [2024-04-27 00:54:40.555970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:54.527 [2024-04-27 00:54:40.555977] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:54.527 [2024-04-27 00:54:40.558809] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:54.527 [2024-04-27 00:54:40.558836] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1077530 (9): Bad file descriptor 00:20:54.527 [2024-04-27 00:54:40.719260] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:54.527 00:20:54.527 Latency(us) 00:20:54.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.527 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:54.527 Verification LBA range: start 0x0 length 0x4000 00:20:54.527 NVMe0n1 : 15.01 10680.31 41.72 1085.82 0.00 10855.89 1061.40 26670.30 00:20:54.527 =================================================================================================================== 00:20:54.527 Total : 10680.31 41.72 1085.82 0.00 10855.89 1061.40 26670.30 00:20:54.527 Received shutdown signal, test time was about 15.000000 seconds 00:20:54.527 00:20:54.527 Latency(us) 00:20:54.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.527 =================================================================================================================== 00:20:54.527 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.527 00:54:46 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:20:54.527 00:54:46 -- host/failover.sh@65 -- # count=3 00:20:54.527 00:54:46 -- host/failover.sh@67 -- # (( count != 3 )) 00:20:54.527 00:54:46 -- host/failover.sh@73 -- # bdevperf_pid=1763838 00:20:54.527 00:54:46 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:20:54.527 00:54:46 -- host/failover.sh@75 -- # waitforlisten 1763838 /var/tmp/bdevperf.sock 00:20:54.527 00:54:46 -- common/autotest_common.sh@817 -- # '[' -z 1763838 ']' 00:20:54.527 00:54:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.527 00:54:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:54.527 00:54:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
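Note on the step above: the trace counts the 'Resetting controller successful' messages captured from the previous bdevperf run (expecting one per configured path), then relaunches bdevperf in RPC-server mode and waits for its UNIX socket before driving it over rpc.py. A minimal standalone sketch of that sequence, using the flags and socket path shown in the trace (the try.txt log name is taken from later in the trace, and the socket-wait loop is only a stand-in for the waitforlisten helper), would be:

# count successful controller resets recorded by the previous run
count=$(grep -c 'Resetting controller successful' try.txt)
(( count == 3 )) || exit 1

# start bdevperf in RPC-server mode (-z) and wait for its UNIX socket
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done   # stand-in for waitforlisten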
00:20:54.527 00:54:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:54.527 00:54:46 -- common/autotest_common.sh@10 -- # set +x 00:20:55.097 00:54:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:55.097 00:54:47 -- common/autotest_common.sh@850 -- # return 0 00:20:55.097 00:54:47 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:55.356 [2024-04-27 00:54:47.825233] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:55.356 00:54:47 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:55.356 [2024-04-27 00:54:47.997719] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:55.356 00:54:48 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:55.923 NVMe0n1 00:20:55.923 00:54:48 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:56.182 00:20:56.182 00:54:48 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:56.440 00:20:56.440 00:54:49 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:56.440 00:54:49 -- host/failover.sh@82 -- # grep -q NVMe0 00:20:56.700 00:54:49 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:56.959 00:54:49 -- host/failover.sh@87 -- # sleep 3 00:21:00.289 00:54:52 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:00.289 00:54:52 -- host/failover.sh@88 -- # grep -q NVMe0 00:21:00.289 00:54:52 -- host/failover.sh@90 -- # run_test_pid=1764815 00:21:00.289 00:54:52 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:00.289 00:54:52 -- host/failover.sh@92 -- # wait 1764815 00:21:01.226 0 00:21:01.226 00:54:53 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:01.226 [2024-04-27 00:54:46.852805] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
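The RPC sequence above is the core of the failover scenario: the target exposes nqn.2016-06.io.spdk:cnode1 on ports 4420, 4421 and 4422, the bdevperf host attaches one controller per path under the same name NVMe0, and the test then detaches the active path so I/O fails over. Collapsed into plain rpc.py calls with the same addresses and NQN as the trace (the loop is only a compact paraphrase of the three separate attach calls, and paths are shortened relative to the workspace):

RPC=./scripts/rpc.py
# expose the subsystem on two extra ports (4420 is already listening)
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# attach one NVMe0 controller per path from the bdevperf side
for port in 4420 4421 4422; do
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
       -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done

# drop the active path and run I/O; bdevperf should fail over to 4421/4422
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
     -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests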
00:21:01.226 [2024-04-27 00:54:46.852873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763838 ] 00:21:01.226 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.226 [2024-04-27 00:54:46.908275] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.226 [2024-04-27 00:54:46.975246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.226 [2024-04-27 00:54:49.373801] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:01.226 [2024-04-27 00:54:49.373847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.226 [2024-04-27 00:54:49.373858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.226 [2024-04-27 00:54:49.373866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.226 [2024-04-27 00:54:49.373873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.226 [2024-04-27 00:54:49.373880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.226 [2024-04-27 00:54:49.373887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.226 [2024-04-27 00:54:49.373894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.226 [2024-04-27 00:54:49.373901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.226 [2024-04-27 00:54:49.373907] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.226 [2024-04-27 00:54:49.373931] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.226 [2024-04-27 00:54:49.373945] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2111530 (9): Bad file descriptor 00:21:01.226 [2024-04-27 00:54:49.508283] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:01.226 Running I/O for 1 seconds... 
00:21:01.226 00:21:01.226 Latency(us) 00:21:01.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.226 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:01.226 Verification LBA range: start 0x0 length 0x4000 00:21:01.226 NVMe0n1 : 1.00 10383.95 40.56 0.00 0.00 12278.53 2635.69 29063.79 00:21:01.226 =================================================================================================================== 00:21:01.226 Total : 10383.95 40.56 0.00 0.00 12278.53 2635.69 29063.79 00:21:01.226 00:54:53 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:01.226 00:54:53 -- host/failover.sh@95 -- # grep -q NVMe0 00:21:01.226 00:54:53 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:01.486 00:54:54 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:01.486 00:54:54 -- host/failover.sh@99 -- # grep -q NVMe0 00:21:01.744 00:54:54 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:02.003 00:54:54 -- host/failover.sh@101 -- # sleep 3 00:21:05.288 00:54:57 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:05.288 00:54:57 -- host/failover.sh@103 -- # grep -q NVMe0 00:21:05.288 00:54:57 -- host/failover.sh@108 -- # killprocess 1763838 00:21:05.288 00:54:57 -- common/autotest_common.sh@936 -- # '[' -z 1763838 ']' 00:21:05.288 00:54:57 -- common/autotest_common.sh@940 -- # kill -0 1763838 00:21:05.288 00:54:57 -- common/autotest_common.sh@941 -- # uname 00:21:05.288 00:54:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:05.288 00:54:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1763838 00:21:05.288 00:54:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:05.288 00:54:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:05.288 00:54:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1763838' 00:21:05.288 killing process with pid 1763838 00:21:05.288 00:54:57 -- common/autotest_common.sh@955 -- # kill 1763838 00:21:05.288 00:54:57 -- common/autotest_common.sh@960 -- # wait 1763838 00:21:05.288 00:54:57 -- host/failover.sh@110 -- # sync 00:21:05.288 00:54:57 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:05.548 00:54:58 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:05.548 00:54:58 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:05.548 00:54:58 -- host/failover.sh@116 -- # nvmftestfini 00:21:05.548 00:54:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:05.548 00:54:58 -- nvmf/common.sh@117 -- # sync 00:21:05.548 00:54:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:05.548 00:54:58 -- nvmf/common.sh@120 -- # set +e 00:21:05.548 00:54:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:05.548 00:54:58 -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-tcp 00:21:05.548 rmmod nvme_tcp 00:21:05.548 rmmod nvme_fabrics 00:21:05.548 rmmod nvme_keyring 00:21:05.548 00:54:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.548 00:54:58 -- nvmf/common.sh@124 -- # set -e 00:21:05.548 00:54:58 -- nvmf/common.sh@125 -- # return 0 00:21:05.548 00:54:58 -- nvmf/common.sh@478 -- # '[' -n 1760810 ']' 00:21:05.548 00:54:58 -- nvmf/common.sh@479 -- # killprocess 1760810 00:21:05.548 00:54:58 -- common/autotest_common.sh@936 -- # '[' -z 1760810 ']' 00:21:05.548 00:54:58 -- common/autotest_common.sh@940 -- # kill -0 1760810 00:21:05.548 00:54:58 -- common/autotest_common.sh@941 -- # uname 00:21:05.548 00:54:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:05.548 00:54:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1760810 00:21:05.548 00:54:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:05.548 00:54:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:05.548 00:54:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1760810' 00:21:05.548 killing process with pid 1760810 00:21:05.548 00:54:58 -- common/autotest_common.sh@955 -- # kill 1760810 00:21:05.548 00:54:58 -- common/autotest_common.sh@960 -- # wait 1760810 00:21:05.808 00:54:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:05.808 00:54:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:05.808 00:54:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:05.808 00:54:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.808 00:54:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.808 00:54:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.808 00:54:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.808 00:54:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.356 00:55:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:08.356 00:21:08.356 real 0m38.423s 00:21:08.356 user 2m3.924s 00:21:08.356 sys 0m7.361s 00:21:08.356 00:55:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:08.356 00:55:00 -- common/autotest_common.sh@10 -- # set +x 00:21:08.356 ************************************ 00:21:08.356 END TEST nvmf_failover 00:21:08.356 ************************************ 00:21:08.356 00:55:00 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:08.356 00:55:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:08.356 00:55:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:08.356 00:55:00 -- common/autotest_common.sh@10 -- # set +x 00:21:08.356 ************************************ 00:21:08.356 START TEST nvmf_discovery 00:21:08.356 ************************************ 00:21:08.356 00:55:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:08.356 * Looking for test storage... 
00:21:08.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:08.357 00:55:00 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:08.357 00:55:00 -- nvmf/common.sh@7 -- # uname -s 00:21:08.357 00:55:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:08.357 00:55:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:08.357 00:55:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:08.357 00:55:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:08.357 00:55:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:08.357 00:55:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:08.357 00:55:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:08.357 00:55:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:08.357 00:55:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:08.357 00:55:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:08.357 00:55:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:08.357 00:55:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:08.357 00:55:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:08.357 00:55:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:08.357 00:55:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:08.357 00:55:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:08.357 00:55:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:08.357 00:55:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:08.357 00:55:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:08.357 00:55:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:08.357 00:55:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.357 00:55:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.357 00:55:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.357 00:55:00 -- paths/export.sh@5 -- # export PATH 00:21:08.357 00:55:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:08.357 00:55:00 -- nvmf/common.sh@47 -- # : 0 00:21:08.357 00:55:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:08.357 00:55:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:08.357 00:55:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:08.357 00:55:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:08.357 00:55:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:08.357 00:55:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:08.357 00:55:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:08.357 00:55:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:08.357 00:55:00 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:08.357 00:55:00 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:08.357 00:55:00 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:08.357 00:55:00 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:08.357 00:55:00 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:08.357 00:55:00 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:08.357 00:55:00 -- host/discovery.sh@25 -- # nvmftestinit 00:21:08.357 00:55:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:08.357 00:55:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:08.357 00:55:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:08.357 00:55:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:08.357 00:55:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:08.357 00:55:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.357 00:55:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:08.357 00:55:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:08.357 00:55:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:08.357 00:55:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:08.357 00:55:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:08.357 00:55:00 -- common/autotest_common.sh@10 -- # set +x 00:21:13.635 00:55:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:13.635 00:55:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:13.635 00:55:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:13.635 00:55:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:13.635 00:55:05 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:13.635 00:55:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:13.635 00:55:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:13.635 00:55:05 -- nvmf/common.sh@295 -- # net_devs=() 00:21:13.635 00:55:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:13.635 00:55:05 -- nvmf/common.sh@296 -- # e810=() 00:21:13.635 00:55:05 -- nvmf/common.sh@296 -- # local -ga e810 00:21:13.635 00:55:05 -- nvmf/common.sh@297 -- # x722=() 00:21:13.635 00:55:05 -- nvmf/common.sh@297 -- # local -ga x722 00:21:13.635 00:55:05 -- nvmf/common.sh@298 -- # mlx=() 00:21:13.635 00:55:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:13.635 00:55:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.635 00:55:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.635 00:55:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.635 00:55:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.635 00:55:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.635 00:55:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.635 00:55:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.635 00:55:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.635 00:55:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.635 00:55:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.635 00:55:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.635 00:55:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:13.635 00:55:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:13.635 00:55:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:13.635 00:55:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:13.635 00:55:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:13.635 00:55:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:13.635 00:55:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.635 00:55:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:13.635 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:13.635 00:55:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.635 00:55:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.635 00:55:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.635 00:55:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.635 00:55:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:13.635 00:55:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.635 00:55:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:13.635 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:13.635 00:55:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.635 00:55:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.635 00:55:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.635 00:55:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.635 00:55:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:13.635 00:55:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:13.636 00:55:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:13.636 00:55:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:13.636 00:55:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.636 
00:55:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.636 00:55:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:13.636 00:55:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.636 00:55:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:13.636 Found net devices under 0000:86:00.0: cvl_0_0 00:21:13.636 00:55:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.636 00:55:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.636 00:55:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.636 00:55:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:13.636 00:55:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.636 00:55:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:13.636 Found net devices under 0000:86:00.1: cvl_0_1 00:21:13.636 00:55:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.636 00:55:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:13.636 00:55:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:13.636 00:55:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:13.636 00:55:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:13.636 00:55:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:13.636 00:55:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.636 00:55:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.636 00:55:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.636 00:55:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:13.636 00:55:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.636 00:55:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.636 00:55:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:13.636 00:55:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.636 00:55:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.636 00:55:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:13.636 00:55:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:13.636 00:55:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.636 00:55:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.636 00:55:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.636 00:55:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.636 00:55:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:13.636 00:55:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.636 00:55:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.636 00:55:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.636 00:55:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:13.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:21:13.636 00:21:13.636 --- 10.0.0.2 ping statistics --- 00:21:13.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.636 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:13.636 00:55:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.430 ms 00:21:13.636 00:21:13.636 --- 10.0.0.1 ping statistics --- 00:21:13.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.636 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:21:13.636 00:55:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.636 00:55:05 -- nvmf/common.sh@411 -- # return 0 00:21:13.636 00:55:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:13.636 00:55:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.636 00:55:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:13.636 00:55:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:13.636 00:55:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.636 00:55:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:13.636 00:55:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:13.636 00:55:05 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:13.636 00:55:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:13.636 00:55:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:13.636 00:55:05 -- common/autotest_common.sh@10 -- # set +x 00:21:13.636 00:55:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:13.636 00:55:05 -- nvmf/common.sh@470 -- # nvmfpid=1769216 00:21:13.636 00:55:05 -- nvmf/common.sh@471 -- # waitforlisten 1769216 00:21:13.636 00:55:05 -- common/autotest_common.sh@817 -- # '[' -z 1769216 ']' 00:21:13.636 00:55:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.636 00:55:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:13.636 00:55:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.636 00:55:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:13.636 00:55:05 -- common/autotest_common.sh@10 -- # set +x 00:21:13.636 [2024-04-27 00:55:05.981923] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:21:13.636 [2024-04-27 00:55:05.981966] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.636 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.636 [2024-04-27 00:55:06.038517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.636 [2024-04-27 00:55:06.115117] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.636 [2024-04-27 00:55:06.115155] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.636 [2024-04-27 00:55:06.115163] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.636 [2024-04-27 00:55:06.115169] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.636 [2024-04-27 00:55:06.115174] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
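For reference, the test bed built above isolates one port of the E810 NIC (cvl_0_0) in a network namespace as the target side, keeps its sibling port (cvl_0_1) in the root namespace as the initiator side, verifies reachability in both directions, and then launches the NVMe-oF target inside the namespace. A condensed sketch of the same setup, with the interface names, addresses and core mask taken from the trace (binary paths shortened relative to the workspace):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> root namespace

# the NVMe-oF target then runs inside the namespace on core mask 0x2
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &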
00:21:13.636 [2024-04-27 00:55:06.115190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.205 00:55:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:14.205 00:55:06 -- common/autotest_common.sh@850 -- # return 0 00:21:14.205 00:55:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:14.205 00:55:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:14.205 00:55:06 -- common/autotest_common.sh@10 -- # set +x 00:21:14.205 00:55:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.205 00:55:06 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:14.205 00:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.205 00:55:06 -- common/autotest_common.sh@10 -- # set +x 00:21:14.205 [2024-04-27 00:55:06.830291] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.205 00:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.205 00:55:06 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:14.205 00:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.205 00:55:06 -- common/autotest_common.sh@10 -- # set +x 00:21:14.205 [2024-04-27 00:55:06.838415] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:14.205 00:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.205 00:55:06 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:14.205 00:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.205 00:55:06 -- common/autotest_common.sh@10 -- # set +x 00:21:14.205 null0 00:21:14.205 00:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.205 00:55:06 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:14.205 00:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.205 00:55:06 -- common/autotest_common.sh@10 -- # set +x 00:21:14.205 null1 00:21:14.205 00:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.205 00:55:06 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:14.205 00:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.205 00:55:06 -- common/autotest_common.sh@10 -- # set +x 00:21:14.205 00:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.205 00:55:06 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:14.205 00:55:06 -- host/discovery.sh@45 -- # hostpid=1769399 00:21:14.205 00:55:06 -- host/discovery.sh@46 -- # waitforlisten 1769399 /tmp/host.sock 00:21:14.205 00:55:06 -- common/autotest_common.sh@817 -- # '[' -z 1769399 ']' 00:21:14.205 00:55:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:21:14.205 00:55:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:14.205 00:55:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:14.205 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:14.205 00:55:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:14.205 00:55:06 -- common/autotest_common.sh@10 -- # set +x 00:21:14.205 [2024-04-27 00:55:06.900264] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
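At this point the discovery test has two SPDK applications: the target (default RPC socket, running inside the namespace) with a TCP transport, a discovery listener on port 8009 and two null bdevs, plus a second nvmf_tgt acting as the host on /tmp/host.sock. Reduced to the underlying RPCs from the trace (paths shortened, and with the host-side discovery start only summarized in a comment since it appears further down in the trace), the bring-up looks approximately like:

RPC=./scripts/rpc.py
# target side (default RPC socket /var/tmp/spdk.sock)
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$RPC bdev_null_create null0 1000 512
$RPC bdev_null_create null1 1000 512

# host side: a second nvmf_tgt on its own RPC socket that will act as the discovery client
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
# once it listens, the trace enables bdev_nvme logging on /tmp/host.sock and starts
# discovery against 10.0.0.2:8009 with hostnqn nqn.2021-12.io.spdk:test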
00:21:14.205 [2024-04-27 00:55:06.900308] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1769399 ] 00:21:14.465 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.465 [2024-04-27 00:55:06.950461] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.465 [2024-04-27 00:55:07.027218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.034 00:55:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:15.034 00:55:07 -- common/autotest_common.sh@850 -- # return 0 00:21:15.034 00:55:07 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:15.034 00:55:07 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:15.034 00:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.034 00:55:07 -- common/autotest_common.sh@10 -- # set +x 00:21:15.034 00:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.034 00:55:07 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:15.034 00:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.034 00:55:07 -- common/autotest_common.sh@10 -- # set +x 00:21:15.034 00:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.034 00:55:07 -- host/discovery.sh@72 -- # notify_id=0 00:21:15.034 00:55:07 -- host/discovery.sh@83 -- # get_subsystem_names 00:21:15.034 00:55:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:15.034 00:55:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:15.034 00:55:07 -- host/discovery.sh@59 -- # sort 00:21:15.034 00:55:07 -- host/discovery.sh@59 -- # xargs 00:21:15.034 00:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.034 00:55:07 -- common/autotest_common.sh@10 -- # set +x 00:21:15.034 00:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.293 00:55:07 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:15.293 00:55:07 -- host/discovery.sh@84 -- # get_bdev_list 00:21:15.293 00:55:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:15.293 00:55:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:15.293 00:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.293 00:55:07 -- host/discovery.sh@55 -- # sort 00:21:15.293 00:55:07 -- common/autotest_common.sh@10 -- # set +x 00:21:15.293 00:55:07 -- host/discovery.sh@55 -- # xargs 00:21:15.293 00:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.293 00:55:07 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:15.293 00:55:07 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:15.293 00:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.293 00:55:07 -- common/autotest_common.sh@10 -- # set +x 00:21:15.293 00:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.293 00:55:07 -- host/discovery.sh@87 -- # get_subsystem_names 00:21:15.293 00:55:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:15.293 00:55:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:15.293 00:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.293 00:55:07 -- host/discovery.sh@59 -- # sort 
00:21:15.293 00:55:07 -- common/autotest_common.sh@10 -- # set +x 00:21:15.293 00:55:07 -- host/discovery.sh@59 -- # xargs 00:21:15.293 00:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.293 00:55:07 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:15.293 00:55:07 -- host/discovery.sh@88 -- # get_bdev_list 00:21:15.293 00:55:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:15.293 00:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.293 00:55:07 -- common/autotest_common.sh@10 -- # set +x 00:21:15.293 00:55:07 -- host/discovery.sh@55 -- # xargs 00:21:15.293 00:55:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:15.293 00:55:07 -- host/discovery.sh@55 -- # sort 00:21:15.293 00:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.293 00:55:07 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:15.293 00:55:07 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:15.293 00:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.293 00:55:07 -- common/autotest_common.sh@10 -- # set +x 00:21:15.293 00:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.293 00:55:07 -- host/discovery.sh@91 -- # get_subsystem_names 00:21:15.293 00:55:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:15.293 00:55:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:15.293 00:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.293 00:55:07 -- host/discovery.sh@59 -- # sort 00:21:15.293 00:55:07 -- common/autotest_common.sh@10 -- # set +x 00:21:15.293 00:55:07 -- host/discovery.sh@59 -- # xargs 00:21:15.293 00:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.293 00:55:07 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:15.293 00:55:07 -- host/discovery.sh@92 -- # get_bdev_list 00:21:15.293 00:55:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:15.293 00:55:07 -- host/discovery.sh@55 -- # sort 00:21:15.293 00:55:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:15.293 00:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.293 00:55:07 -- common/autotest_common.sh@10 -- # set +x 00:21:15.293 00:55:07 -- host/discovery.sh@55 -- # xargs 00:21:15.552 00:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.552 00:55:08 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:15.552 00:55:08 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:15.552 00:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.552 00:55:08 -- common/autotest_common.sh@10 -- # set +x 00:21:15.552 [2024-04-27 00:55:08.033577] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:15.552 00:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.552 00:55:08 -- host/discovery.sh@97 -- # get_subsystem_names 00:21:15.552 00:55:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:15.552 00:55:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:15.552 00:55:08 -- host/discovery.sh@59 -- # sort 00:21:15.552 00:55:08 -- host/discovery.sh@59 -- # xargs 00:21:15.552 00:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.552 00:55:08 -- common/autotest_common.sh@10 -- # set +x 00:21:15.552 00:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.552 00:55:08 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:15.552 00:55:08 -- host/discovery.sh@98 -- # get_bdev_list 00:21:15.552 00:55:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:15.552 00:55:08 -- host/discovery.sh@55 -- # xargs 00:21:15.552 00:55:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:15.552 00:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.552 00:55:08 -- host/discovery.sh@55 -- # sort 00:21:15.552 00:55:08 -- common/autotest_common.sh@10 -- # set +x 00:21:15.552 00:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.552 00:55:08 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:15.552 00:55:08 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:15.552 00:55:08 -- host/discovery.sh@79 -- # expected_count=0 00:21:15.552 00:55:08 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:15.552 00:55:08 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:15.552 00:55:08 -- common/autotest_common.sh@901 -- # local max=10 00:21:15.552 00:55:08 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:15.552 00:55:08 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:15.552 00:55:08 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:15.552 00:55:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:15.552 00:55:08 -- host/discovery.sh@74 -- # jq '. | length' 00:21:15.552 00:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.552 00:55:08 -- common/autotest_common.sh@10 -- # set +x 00:21:15.552 00:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.552 00:55:08 -- host/discovery.sh@74 -- # notification_count=0 00:21:15.552 00:55:08 -- host/discovery.sh@75 -- # notify_id=0 00:21:15.552 00:55:08 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:15.552 00:55:08 -- common/autotest_common.sh@904 -- # return 0 00:21:15.552 00:55:08 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:15.552 00:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.552 00:55:08 -- common/autotest_common.sh@10 -- # set +x 00:21:15.552 00:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.552 00:55:08 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:15.552 00:55:08 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:15.552 00:55:08 -- common/autotest_common.sh@901 -- # local max=10 00:21:15.552 00:55:08 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:15.553 00:55:08 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:15.553 00:55:08 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:15.553 00:55:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:15.553 00:55:08 -- host/discovery.sh@59 -- # xargs 00:21:15.553 00:55:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:15.553 00:55:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.553 00:55:08 -- host/discovery.sh@59 -- # sort 00:21:15.553 00:55:08 -- common/autotest_common.sh@10 -- # set +x 00:21:15.553 00:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:21:15.553 00:55:08 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:21:15.553 00:55:08 -- common/autotest_common.sh@906 -- # sleep 1 00:21:16.121 [2024-04-27 00:55:08.708986] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:16.121 [2024-04-27 00:55:08.709006] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:16.121 [2024-04-27 00:55:08.709021] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:16.121 [2024-04-27 00:55:08.797288] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:16.380 [2024-04-27 00:55:09.021915] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:16.380 [2024-04-27 00:55:09.021934] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:16.640 00:55:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:16.640 00:55:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:16.640 00:55:09 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:16.640 00:55:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:16.640 00:55:09 -- host/discovery.sh@59 -- # xargs 00:21:16.640 00:55:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:16.640 00:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.640 00:55:09 -- host/discovery.sh@59 -- # sort 00:21:16.640 00:55:09 -- common/autotest_common.sh@10 -- # set +x 00:21:16.640 00:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.640 00:55:09 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.640 00:55:09 -- common/autotest_common.sh@904 -- # return 0 00:21:16.640 00:55:09 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:16.640 00:55:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:16.640 00:55:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:16.640 00:55:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:16.640 00:55:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:16.640 00:55:09 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:16.640 00:55:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:16.640 00:55:09 -- host/discovery.sh@55 -- # xargs 00:21:16.640 00:55:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:16.640 00:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.640 00:55:09 -- host/discovery.sh@55 -- # sort 00:21:16.640 00:55:09 -- common/autotest_common.sh@10 -- # set +x 00:21:16.640 00:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.640 00:55:09 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:16.640 00:55:09 -- common/autotest_common.sh@904 -- # return 0 00:21:16.640 00:55:09 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:16.640 00:55:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:16.640 00:55:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:16.640 00:55:09 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:21:16.640 00:55:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:21:16.899 00:55:09 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:16.899 00:55:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:16.899 00:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.899 00:55:09 -- common/autotest_common.sh@10 -- # set +x 00:21:16.899 00:55:09 -- host/discovery.sh@63 -- # sort -n 00:21:16.899 00:55:09 -- host/discovery.sh@63 -- # xargs 00:21:16.899 00:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:21:16.899 00:55:09 -- common/autotest_common.sh@904 -- # return 0 00:21:16.899 00:55:09 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:16.899 00:55:09 -- host/discovery.sh@79 -- # expected_count=1 00:21:16.899 00:55:09 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:16.899 00:55:09 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:16.899 00:55:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:16.899 00:55:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:16.899 00:55:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:16.899 00:55:09 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:16.899 00:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.899 00:55:09 -- common/autotest_common.sh@10 -- # set +x 00:21:16.899 00:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.899 00:55:09 -- host/discovery.sh@74 -- # notification_count=1 00:21:16.899 00:55:09 -- host/discovery.sh@75 -- # notify_id=1 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:16.899 00:55:09 -- common/autotest_common.sh@904 -- # return 0 00:21:16.899 00:55:09 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:16.899 00:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.899 00:55:09 -- common/autotest_common.sh@10 -- # set +x 00:21:16.899 00:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.899 00:55:09 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:16.899 00:55:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:16.899 00:55:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:16.899 00:55:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:16.899 00:55:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:16.899 00:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.899 00:55:09 -- host/discovery.sh@55 -- # sort 00:21:16.899 00:55:09 -- common/autotest_common.sh@10 -- # set +x 00:21:16.899 00:55:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:16.899 00:55:09 -- host/discovery.sh@55 -- # xargs 00:21:16.899 00:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:16.899 00:55:09 -- common/autotest_common.sh@904 -- # return 0 00:21:16.899 00:55:09 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:16.899 00:55:09 -- host/discovery.sh@79 -- # expected_count=1 00:21:16.899 00:55:09 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:16.899 00:55:09 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:16.899 00:55:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:16.899 00:55:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:16.899 00:55:09 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:16.899 00:55:09 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:16.899 00:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.899 00:55:09 -- common/autotest_common.sh@10 -- # set +x 00:21:16.899 00:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.899 00:55:09 -- host/discovery.sh@74 -- # notification_count=1 00:21:16.899 00:55:09 -- host/discovery.sh@75 -- # notify_id=2 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:16.899 00:55:09 -- common/autotest_common.sh@904 -- # return 0 00:21:16.899 00:55:09 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:16.899 00:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.899 00:55:09 -- common/autotest_common.sh@10 -- # set +x 00:21:16.899 [2024-04-27 00:55:09.525620] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:16.899 [2024-04-27 00:55:09.526246] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:16.899 [2024-04-27 00:55:09.526269] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:16.899 00:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.899 00:55:09 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:16.899 00:55:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:16.899 00:55:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:16.899 00:55:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:16.899 00:55:09 -- host/discovery.sh@59 -- # xargs 00:21:16.899 00:55:09 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:16.899 00:55:09 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:16.899 00:55:09 -- host/discovery.sh@59 -- # sort 00:21:16.899 00:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.899 00:55:09 -- common/autotest_common.sh@10 -- # set +x 00:21:16.899 00:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.899 00:55:09 -- common/autotest_common.sh@904 -- # return 0 00:21:16.899 00:55:09 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:16.899 00:55:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:16.899 00:55:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:16.899 00:55:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:16.899 00:55:09 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:16.899 00:55:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:16.899 00:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.899 00:55:09 -- common/autotest_common.sh@10 -- # set +x 00:21:16.899 00:55:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:16.899 00:55:09 -- host/discovery.sh@55 -- # sort 00:21:16.899 00:55:09 -- host/discovery.sh@55 -- # xargs 00:21:17.158 00:55:09 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:21:17.158 00:55:09 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:17.158 00:55:09 -- common/autotest_common.sh@904 -- # return 0 00:21:17.158 00:55:09 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:17.158 00:55:09 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:17.158 00:55:09 -- common/autotest_common.sh@901 -- # local max=10 00:21:17.158 00:55:09 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:17.158 00:55:09 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:17.158 00:55:09 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:21:17.158 00:55:09 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:17.158 00:55:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.158 00:55:09 -- common/autotest_common.sh@10 -- # set +x 00:21:17.158 00:55:09 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:17.158 00:55:09 -- host/discovery.sh@63 -- # sort -n 00:21:17.158 00:55:09 -- host/discovery.sh@63 -- # xargs 00:21:17.158 00:55:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.158 [2024-04-27 00:55:09.654881] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:17.158 00:55:09 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:17.158 00:55:09 -- common/autotest_common.sh@906 -- # sleep 1 00:21:17.158 [2024-04-27 00:55:09.713502] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:17.158 [2024-04-27 00:55:09.713518] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:17.158 [2024-04-27 00:55:09.713523] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:18.096 00:55:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:18.096 00:55:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:18.096 00:55:10 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:21:18.096 00:55:10 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:18.096 00:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.096 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:21:18.096 00:55:10 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:18.096 00:55:10 -- host/discovery.sh@63 -- # sort -n 00:21:18.096 00:55:10 -- host/discovery.sh@63 -- # xargs 00:21:18.096 00:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.096 00:55:10 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:18.096 00:55:10 -- common/autotest_common.sh@904 -- # return 0 00:21:18.096 00:55:10 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:18.096 00:55:10 -- host/discovery.sh@79 -- # expected_count=0 00:21:18.096 00:55:10 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:18.096 00:55:10 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:18.096 00:55:10 -- common/autotest_common.sh@901 -- # local max=10 00:21:18.096 00:55:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:18.096 00:55:10 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:18.096 00:55:10 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:18.096 00:55:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:18.096 00:55:10 -- host/discovery.sh@74 -- # jq '. | length' 00:21:18.096 00:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.096 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:21:18.096 00:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.096 00:55:10 -- host/discovery.sh@74 -- # notification_count=0 00:21:18.096 00:55:10 -- host/discovery.sh@75 -- # notify_id=2 00:21:18.096 00:55:10 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:18.096 00:55:10 -- common/autotest_common.sh@904 -- # return 0 00:21:18.096 00:55:10 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:18.096 00:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.096 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:21:18.096 [2024-04-27 00:55:10.781680] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:18.096 [2024-04-27 00:55:10.781705] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:18.096 [2024-04-27 00:55:10.784248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.096 [2024-04-27 00:55:10.784266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.096 [2024-04-27 00:55:10.784274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.096 [2024-04-27 00:55:10.784283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.096 [2024-04-27 00:55:10.784290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.096 [2024-04-27 00:55:10.784296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.096 [2024-04-27 00:55:10.784303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:18.096 [2024-04-27 00:55:10.784309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:18.096 [2024-04-27 00:55:10.784316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffee30 is same with the state(5) to be set 00:21:18.096 00:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.096 00:55:10 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:18.096 00:55:10 -- common/autotest_common.sh@900 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:21:18.096 00:55:10 -- common/autotest_common.sh@901 -- # local max=10 00:21:18.096 00:55:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:18.096 00:55:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:18.096 00:55:10 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:18.096 00:55:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:18.096 00:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.096 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:21:18.096 00:55:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:18.357 00:55:10 -- host/discovery.sh@59 -- # sort 00:21:18.357 00:55:10 -- host/discovery.sh@59 -- # xargs 00:21:18.357 [2024-04-27 00:55:10.794262] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffee30 (9): Bad file descriptor 00:21:18.357 00:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.357 [2024-04-27 00:55:10.804301] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:18.357 [2024-04-27 00:55:10.804727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.357 [2024-04-27 00:55:10.804956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.357 [2024-04-27 00:55:10.804968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffee30 with addr=10.0.0.2, port=4420 00:21:18.357 [2024-04-27 00:55:10.804976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffee30 is same with the state(5) to be set 00:21:18.357 [2024-04-27 00:55:10.804988] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffee30 (9): Bad file descriptor 00:21:18.357 [2024-04-27 00:55:10.805005] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:18.357 [2024-04-27 00:55:10.805012] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:18.357 [2024-04-27 00:55:10.805020] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:18.357 [2024-04-27 00:55:10.805031] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:18.357 [2024-04-27 00:55:10.814360] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:18.357 [2024-04-27 00:55:10.814701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.357 [2024-04-27 00:55:10.815013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.357 [2024-04-27 00:55:10.815024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffee30 with addr=10.0.0.2, port=4420 00:21:18.357 [2024-04-27 00:55:10.815032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffee30 is same with the state(5) to be set 00:21:18.357 [2024-04-27 00:55:10.815042] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffee30 (9): Bad file descriptor 00:21:18.357 [2024-04-27 00:55:10.815063] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:18.357 [2024-04-27 00:55:10.815076] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:18.357 [2024-04-27 00:55:10.815084] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:18.357 [2024-04-27 00:55:10.815094] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:18.357 [2024-04-27 00:55:10.824410] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:18.357 [2024-04-27 00:55:10.824814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.357 [2024-04-27 00:55:10.825198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.357 [2024-04-27 00:55:10.825210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffee30 with addr=10.0.0.2, port=4420 00:21:18.357 [2024-04-27 00:55:10.825218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffee30 is same with the state(5) to be set 00:21:18.357 [2024-04-27 00:55:10.825229] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffee30 (9): Bad file descriptor 00:21:18.357 [2024-04-27 00:55:10.825253] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:18.357 [2024-04-27 00:55:10.825261] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:18.357 [2024-04-27 00:55:10.825271] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:18.357 [2024-04-27 00:55:10.825281] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
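The autotest_common.sh@900-@906 lines interleaved with these reconnect errors come from the test's generic polling helper: it evaluates a condition string up to ten times with a one-second pause between attempts. A simplified sketch reconstructed from the trace (not the verbatim autotest_common.sh source):

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          if eval "$cond"; then
              return 0          # condition met
          fi
          sleep 1               # retry roughly once per second
      done
      return 1                  # give up after ~10 attempts
  }

  # Conditions driven through it in this log (the get_* helpers are test-local):
  # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
  # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'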
00:21:18.357 00:55:10 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.357 00:55:10 -- common/autotest_common.sh@904 -- # return 0 00:21:18.357 00:55:10 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:18.357 00:55:10 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:18.357 00:55:10 -- common/autotest_common.sh@901 -- # local max=10 00:21:18.357 [2024-04-27 00:55:10.834462] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:18.357 00:55:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:18.357 [2024-04-27 00:55:10.834800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.357 00:55:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:18.357 [2024-04-27 00:55:10.835128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.358 [2024-04-27 00:55:10.835143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffee30 with addr=10.0.0.2, port=4420 00:21:18.358 [2024-04-27 00:55:10.835151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffee30 is same with the state(5) to be set 00:21:18.358 [2024-04-27 00:55:10.835162] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffee30 (9): Bad file descriptor 00:21:18.358 [2024-04-27 00:55:10.835178] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:18.358 [2024-04-27 00:55:10.835186] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:18.358 [2024-04-27 00:55:10.835193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:18.358 [2024-04-27 00:55:10.835203] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
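The jq/sort/xargs pipelines traced around these errors are the test's get_subsystem_names and get_bdev_list helpers, which reduce the host-side RPC output to space-separated name lists. A rough equivalent, again assuming scripts/rpc.py in place of the rpc_cmd wrapper:

  get_subsystem_names() {
      scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # At this point in the log the controller and both namespace bdevs are still present:
  [[ "$(get_subsystem_names)" == "nvme0" ]]
  [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]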
00:21:18.358 00:55:10 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:18.358 00:55:10 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.358 00:55:10 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:18.358 00:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.358 00:55:10 -- host/discovery.sh@55 -- # sort 00:21:18.358 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:21:18.358 00:55:10 -- host/discovery.sh@55 -- # xargs 00:21:18.358 [2024-04-27 00:55:10.844516] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:18.358 [2024-04-27 00:55:10.844734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.358 [2024-04-27 00:55:10.845111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.358 [2024-04-27 00:55:10.845122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffee30 with addr=10.0.0.2, port=4420 00:21:18.358 [2024-04-27 00:55:10.845130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffee30 is same with the state(5) to be set 00:21:18.358 [2024-04-27 00:55:10.845142] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffee30 (9): Bad file descriptor 00:21:18.358 [2024-04-27 00:55:10.845158] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:18.358 [2024-04-27 00:55:10.845166] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:18.358 [2024-04-27 00:55:10.845172] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:18.358 [2024-04-27 00:55:10.845183] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:18.358 [2024-04-27 00:55:10.854571] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:18.358 [2024-04-27 00:55:10.854900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.358 [2024-04-27 00:55:10.855245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.358 [2024-04-27 00:55:10.855257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffee30 with addr=10.0.0.2, port=4420 00:21:18.358 [2024-04-27 00:55:10.855264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffee30 is same with the state(5) to be set 00:21:18.358 [2024-04-27 00:55:10.855275] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffee30 (9): Bad file descriptor 00:21:18.358 [2024-04-27 00:55:10.855297] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:18.358 [2024-04-27 00:55:10.855305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:18.358 [2024-04-27 00:55:10.855311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:18.358 [2024-04-27 00:55:10.855321] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
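The is_notification_count_eq checks a few lines below rely on the host's notification bus: get_notification_count asks for every bdev notification newer than the last consumed notify_id and counts them with jq. A rough standalone equivalent, assuming the same host socket and the notify_id of 2 reached at this point in the log:

  notify_id=2
  count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
  echo "notifications newer than id ${notify_id}: ${count}"

In this run the count stays at 0 after the listener removal and only rises to 2 once discovery is stopped and the namespace bdevs are deleted.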
00:21:18.358 [2024-04-27 00:55:10.864620] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:18.358 [2024-04-27 00:55:10.864991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.358 [2024-04-27 00:55:10.865366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:18.358 [2024-04-27 00:55:10.865378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffee30 with addr=10.0.0.2, port=4420 00:21:18.358 [2024-04-27 00:55:10.865386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffee30 is same with the state(5) to be set 00:21:18.358 [2024-04-27 00:55:10.865396] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffee30 (9): Bad file descriptor 00:21:18.358 [2024-04-27 00:55:10.865414] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:18.358 [2024-04-27 00:55:10.865421] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:18.358 [2024-04-27 00:55:10.865428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:18.358 [2024-04-27 00:55:10.865437] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:18.358 [2024-04-27 00:55:10.868425] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:18.358 [2024-04-27 00:55:10.868440] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:18.358 00:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.358 00:55:10 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:18.358 00:55:10 -- common/autotest_common.sh@904 -- # return 0 00:21:18.358 00:55:10 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:18.358 00:55:10 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:18.358 00:55:10 -- common/autotest_common.sh@901 -- # local max=10 00:21:18.358 00:55:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:18.358 00:55:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:18.358 00:55:10 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:21:18.358 00:55:10 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:18.358 00:55:10 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:18.358 00:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.358 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:21:18.358 00:55:10 -- host/discovery.sh@63 -- # sort -n 00:21:18.358 00:55:10 -- host/discovery.sh@63 -- # xargs 00:21:18.358 00:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.358 00:55:10 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:21:18.358 00:55:10 -- common/autotest_common.sh@904 -- # return 0 00:21:18.358 00:55:10 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:18.358 00:55:10 -- host/discovery.sh@79 -- # expected_count=0 00:21:18.358 00:55:10 -- 
host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:18.358 00:55:10 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:18.358 00:55:10 -- common/autotest_common.sh@901 -- # local max=10 00:21:18.358 00:55:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:18.358 00:55:10 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:18.358 00:55:10 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:18.358 00:55:10 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:18.358 00:55:10 -- host/discovery.sh@74 -- # jq '. | length' 00:21:18.358 00:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.358 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:21:18.358 00:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.358 00:55:10 -- host/discovery.sh@74 -- # notification_count=0 00:21:18.358 00:55:10 -- host/discovery.sh@75 -- # notify_id=2 00:21:18.358 00:55:10 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:18.358 00:55:10 -- common/autotest_common.sh@904 -- # return 0 00:21:18.358 00:55:10 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:18.358 00:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.358 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:21:18.358 00:55:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.358 00:55:10 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:18.358 00:55:10 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:18.358 00:55:10 -- common/autotest_common.sh@901 -- # local max=10 00:21:18.358 00:55:10 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:18.358 00:55:10 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:18.358 00:55:10 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:21:18.358 00:55:10 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:18.358 00:55:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.358 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:21:18.358 00:55:10 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:18.358 00:55:11 -- host/discovery.sh@59 -- # sort 00:21:18.358 00:55:11 -- host/discovery.sh@59 -- # xargs 00:21:18.358 00:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.359 00:55:11 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:21:18.359 00:55:11 -- common/autotest_common.sh@904 -- # return 0 00:21:18.359 00:55:11 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:18.359 00:55:11 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:18.359 00:55:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:18.359 00:55:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:18.359 00:55:11 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:18.359 00:55:11 -- common/autotest_common.sh@903 -- # get_bdev_list 00:21:18.359 00:55:11 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:18.359 00:55:11 -- host/discovery.sh@55 -- # xargs 00:21:18.359 00:55:11 -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:21:18.359 00:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.359 00:55:11 -- host/discovery.sh@55 -- # sort 00:21:18.359 00:55:11 -- common/autotest_common.sh@10 -- # set +x 00:21:18.359 00:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.618 00:55:11 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:21:18.618 00:55:11 -- common/autotest_common.sh@904 -- # return 0 00:21:18.618 00:55:11 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:18.618 00:55:11 -- host/discovery.sh@79 -- # expected_count=2 00:21:18.618 00:55:11 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:18.618 00:55:11 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:18.618 00:55:11 -- common/autotest_common.sh@901 -- # local max=10 00:21:18.618 00:55:11 -- common/autotest_common.sh@902 -- # (( max-- )) 00:21:18.618 00:55:11 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:18.618 00:55:11 -- common/autotest_common.sh@903 -- # get_notification_count 00:21:18.618 00:55:11 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:18.618 00:55:11 -- host/discovery.sh@74 -- # jq '. | length' 00:21:18.618 00:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.618 00:55:11 -- common/autotest_common.sh@10 -- # set +x 00:21:18.618 00:55:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.618 00:55:11 -- host/discovery.sh@74 -- # notification_count=2 00:21:18.618 00:55:11 -- host/discovery.sh@75 -- # notify_id=4 00:21:18.618 00:55:11 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:21:18.618 00:55:11 -- common/autotest_common.sh@904 -- # return 0 00:21:18.618 00:55:11 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:18.618 00:55:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.618 00:55:11 -- common/autotest_common.sh@10 -- # set +x 00:21:19.557 [2024-04-27 00:55:12.188222] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:19.557 [2024-04-27 00:55:12.188240] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:19.557 [2024-04-27 00:55:12.188253] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:19.817 [2024-04-27 00:55:12.275514] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:19.817 [2024-04-27 00:55:12.382549] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:19.817 [2024-04-27 00:55:12.382576] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:19.817 00:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.817 00:55:12 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:19.817 00:55:12 -- common/autotest_common.sh@638 -- # local es=0 00:21:19.817 00:55:12 -- 
common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:19.817 00:55:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:19.817 00:55:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:19.817 00:55:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:19.817 00:55:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:19.817 00:55:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:19.817 00:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.817 00:55:12 -- common/autotest_common.sh@10 -- # set +x 00:21:19.817 request: 00:21:19.817 { 00:21:19.817 "name": "nvme", 00:21:19.817 "trtype": "tcp", 00:21:19.817 "traddr": "10.0.0.2", 00:21:19.817 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:19.817 "adrfam": "ipv4", 00:21:19.817 "trsvcid": "8009", 00:21:19.817 "wait_for_attach": true, 00:21:19.817 "method": "bdev_nvme_start_discovery", 00:21:19.817 "req_id": 1 00:21:19.817 } 00:21:19.817 Got JSON-RPC error response 00:21:19.817 response: 00:21:19.817 { 00:21:19.817 "code": -17, 00:21:19.817 "message": "File exists" 00:21:19.817 } 00:21:19.817 00:55:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:19.817 00:55:12 -- common/autotest_common.sh@641 -- # es=1 00:21:19.817 00:55:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:19.817 00:55:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:19.817 00:55:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:19.817 00:55:12 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:19.817 00:55:12 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:19.817 00:55:12 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:19.817 00:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.817 00:55:12 -- host/discovery.sh@67 -- # sort 00:21:19.817 00:55:12 -- common/autotest_common.sh@10 -- # set +x 00:21:19.817 00:55:12 -- host/discovery.sh@67 -- # xargs 00:21:19.817 00:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.817 00:55:12 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:19.817 00:55:12 -- host/discovery.sh@146 -- # get_bdev_list 00:21:19.817 00:55:12 -- host/discovery.sh@55 -- # sort 00:21:19.817 00:55:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:19.817 00:55:12 -- host/discovery.sh@55 -- # xargs 00:21:19.817 00:55:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:19.817 00:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.817 00:55:12 -- common/autotest_common.sh@10 -- # set +x 00:21:19.817 00:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.817 00:55:12 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:19.817 00:55:12 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:19.817 00:55:12 -- common/autotest_common.sh@638 -- # local es=0 00:21:19.817 00:55:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:19.817 00:55:12 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:19.817 00:55:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:19.817 00:55:12 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:19.817 00:55:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:19.817 00:55:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:19.817 00:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.817 00:55:12 -- common/autotest_common.sh@10 -- # set +x 00:21:19.817 request: 00:21:19.817 { 00:21:19.817 "name": "nvme_second", 00:21:19.817 "trtype": "tcp", 00:21:19.817 "traddr": "10.0.0.2", 00:21:19.817 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:19.817 "adrfam": "ipv4", 00:21:19.817 "trsvcid": "8009", 00:21:19.817 "wait_for_attach": true, 00:21:19.817 "method": "bdev_nvme_start_discovery", 00:21:19.817 "req_id": 1 00:21:19.817 } 00:21:20.077 Got JSON-RPC error response 00:21:20.077 response: 00:21:20.077 { 00:21:20.077 "code": -17, 00:21:20.077 "message": "File exists" 00:21:20.077 } 00:21:20.077 00:55:12 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:20.077 00:55:12 -- common/autotest_common.sh@641 -- # es=1 00:21:20.077 00:55:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:20.077 00:55:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:20.077 00:55:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:20.077 00:55:12 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:20.077 00:55:12 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:20.077 00:55:12 -- host/discovery.sh@67 -- # xargs 00:21:20.077 00:55:12 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:20.077 00:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.077 00:55:12 -- common/autotest_common.sh@10 -- # set +x 00:21:20.077 00:55:12 -- host/discovery.sh@67 -- # sort 00:21:20.077 00:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:20.077 00:55:12 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:20.077 00:55:12 -- host/discovery.sh@152 -- # get_bdev_list 00:21:20.077 00:55:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:20.077 00:55:12 -- host/discovery.sh@55 -- # xargs 00:21:20.077 00:55:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:20.077 00:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.077 00:55:12 -- host/discovery.sh@55 -- # sort 00:21:20.077 00:55:12 -- common/autotest_common.sh@10 -- # set +x 00:21:20.077 00:55:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:20.077 00:55:12 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:20.077 00:55:12 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:20.077 00:55:12 -- common/autotest_common.sh@638 -- # local es=0 00:21:20.077 00:55:12 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:20.077 00:55:12 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:20.077 00:55:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.077 00:55:12 -- common/autotest_common.sh@630 -- # 
type -t rpc_cmd 00:21:20.077 00:55:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:20.077 00:55:12 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:20.077 00:55:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:20.077 00:55:12 -- common/autotest_common.sh@10 -- # set +x 00:21:21.015 [2024-04-27 00:55:13.622182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.015 [2024-04-27 00:55:13.622474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.015 [2024-04-27 00:55:13.622487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002750 with addr=10.0.0.2, port=8010 00:21:21.015 [2024-04-27 00:55:13.622500] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:21.015 [2024-04-27 00:55:13.622507] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:21.015 [2024-04-27 00:55:13.622514] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:21.954 [2024-04-27 00:55:14.624620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.954 [2024-04-27 00:55:14.625111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:21.954 [2024-04-27 00:55:14.625124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x202fb60 with addr=10.0.0.2, port=8010 00:21:21.954 [2024-04-27 00:55:14.625135] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:21.954 [2024-04-27 00:55:14.625141] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:21.954 [2024-04-27 00:55:14.625148] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:23.334 [2024-04-27 00:55:15.626631] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:23.334 request: 00:21:23.334 { 00:21:23.334 "name": "nvme_second", 00:21:23.334 "trtype": "tcp", 00:21:23.334 "traddr": "10.0.0.2", 00:21:23.334 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:23.334 "adrfam": "ipv4", 00:21:23.334 "trsvcid": "8010", 00:21:23.334 "attach_timeout_ms": 3000, 00:21:23.334 "method": "bdev_nvme_start_discovery", 00:21:23.334 "req_id": 1 00:21:23.334 } 00:21:23.334 Got JSON-RPC error response 00:21:23.334 response: 00:21:23.334 { 00:21:23.334 "code": -110, 00:21:23.334 "message": "Connection timed out" 00:21:23.334 } 00:21:23.334 00:55:15 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:23.334 00:55:15 -- common/autotest_common.sh@641 -- # es=1 00:21:23.334 00:55:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:23.334 00:55:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:23.334 00:55:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:23.334 00:55:15 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:23.334 00:55:15 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:23.334 00:55:15 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:23.334 00:55:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.334 00:55:15 -- host/discovery.sh@67 -- # sort 00:21:23.334 00:55:15 -- common/autotest_common.sh@10 -- # set +x 00:21:23.334 00:55:15 -- host/discovery.sh@67 -- # xargs 
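The three bdev_nvme_start_discovery attempts above show the RPC's error contract: reusing the controller name or the discovery endpoint of an already-running discovery service fails immediately with -17 "File exists", while an endpoint that never answers (port 8010 here) fails with -110 "Connection timed out" once the -T attach timeout expires. A rough replay of those calls, assuming the same host socket and addresses:

  # Discovery is already running as "nvme" against 10.0.0.2:8009, so both of these
  # return JSON-RPC error -17 "File exists".
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -w
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # Nothing listens on 8010, so this fails with -110 "Connection timed out" after the
  # 3000 ms attach timeout.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000

  # Inspect the discovery services the host still tracks.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'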
00:21:23.334 00:55:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.334 00:55:15 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:23.334 00:55:15 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:23.334 00:55:15 -- host/discovery.sh@161 -- # kill 1769399 00:21:23.334 00:55:15 -- host/discovery.sh@162 -- # nvmftestfini 00:21:23.334 00:55:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:23.334 00:55:15 -- nvmf/common.sh@117 -- # sync 00:21:23.334 00:55:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:23.334 00:55:15 -- nvmf/common.sh@120 -- # set +e 00:21:23.334 00:55:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:23.334 00:55:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:23.334 rmmod nvme_tcp 00:21:23.334 rmmod nvme_fabrics 00:21:23.334 rmmod nvme_keyring 00:21:23.334 00:55:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:23.334 00:55:15 -- nvmf/common.sh@124 -- # set -e 00:21:23.334 00:55:15 -- nvmf/common.sh@125 -- # return 0 00:21:23.334 00:55:15 -- nvmf/common.sh@478 -- # '[' -n 1769216 ']' 00:21:23.334 00:55:15 -- nvmf/common.sh@479 -- # killprocess 1769216 00:21:23.334 00:55:15 -- common/autotest_common.sh@936 -- # '[' -z 1769216 ']' 00:21:23.335 00:55:15 -- common/autotest_common.sh@940 -- # kill -0 1769216 00:21:23.335 00:55:15 -- common/autotest_common.sh@941 -- # uname 00:21:23.335 00:55:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:23.335 00:55:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1769216 00:21:23.335 00:55:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:23.335 00:55:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:23.335 00:55:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1769216' 00:21:23.335 killing process with pid 1769216 00:21:23.335 00:55:15 -- common/autotest_common.sh@955 -- # kill 1769216 00:21:23.335 00:55:15 -- common/autotest_common.sh@960 -- # wait 1769216 00:21:23.335 00:55:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:23.335 00:55:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:23.335 00:55:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:23.335 00:55:15 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.335 00:55:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:23.335 00:55:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.335 00:55:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.335 00:55:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.873 00:55:18 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:25.873 00:21:25.873 real 0m17.442s 00:21:25.873 user 0m22.093s 00:21:25.873 sys 0m5.120s 00:21:25.873 00:55:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:25.873 00:55:18 -- common/autotest_common.sh@10 -- # set +x 00:21:25.873 ************************************ 00:21:25.873 END TEST nvmf_discovery 00:21:25.873 ************************************ 00:21:25.873 00:55:18 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:25.873 00:55:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:25.873 00:55:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:25.873 00:55:18 -- common/autotest_common.sh@10 -- # set +x 00:21:25.873 ************************************ 00:21:25.873 START 
TEST nvmf_discovery_remove_ifc 00:21:25.873 ************************************ 00:21:25.873 00:55:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:25.873 * Looking for test storage... 00:21:25.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:25.873 00:55:18 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.873 00:55:18 -- nvmf/common.sh@7 -- # uname -s 00:21:25.873 00:55:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.873 00:55:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.873 00:55:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.873 00:55:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.873 00:55:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.873 00:55:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.873 00:55:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.873 00:55:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.873 00:55:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.873 00:55:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.873 00:55:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:25.873 00:55:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:25.873 00:55:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.873 00:55:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.873 00:55:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.873 00:55:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.873 00:55:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.873 00:55:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.873 00:55:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.873 00:55:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.873 00:55:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.873 00:55:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.873 00:55:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.873 00:55:18 -- paths/export.sh@5 -- # export PATH 00:21:25.873 00:55:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.873 00:55:18 -- nvmf/common.sh@47 -- # : 0 00:21:25.873 00:55:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.873 00:55:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:25.873 00:55:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.873 00:55:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.874 00:55:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.874 00:55:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.874 00:55:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.874 00:55:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.874 00:55:18 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:25.874 00:55:18 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:25.874 00:55:18 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:25.874 00:55:18 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:25.874 00:55:18 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:25.874 00:55:18 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:25.874 00:55:18 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:25.874 00:55:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:25.874 00:55:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.874 00:55:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:25.874 00:55:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:25.874 00:55:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:25.874 00:55:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.874 00:55:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.874 00:55:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.874 00:55:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:25.874 00:55:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:25.874 00:55:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:25.874 00:55:18 -- common/autotest_common.sh@10 -- # set +x 00:21:31.153 00:55:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:31.153 00:55:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:31.153 00:55:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:31.153 00:55:23 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:31.153 00:55:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:31.153 00:55:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:31.153 00:55:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:31.153 00:55:23 -- nvmf/common.sh@295 -- # net_devs=() 00:21:31.153 00:55:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:31.153 00:55:23 -- nvmf/common.sh@296 -- # e810=() 00:21:31.153 00:55:23 -- nvmf/common.sh@296 -- # local -ga e810 00:21:31.153 00:55:23 -- nvmf/common.sh@297 -- # x722=() 00:21:31.153 00:55:23 -- nvmf/common.sh@297 -- # local -ga x722 00:21:31.153 00:55:23 -- nvmf/common.sh@298 -- # mlx=() 00:21:31.153 00:55:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:31.153 00:55:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.153 00:55:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.153 00:55:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.153 00:55:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.153 00:55:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.153 00:55:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.153 00:55:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.153 00:55:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.153 00:55:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.153 00:55:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.153 00:55:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.153 00:55:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:31.153 00:55:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:31.153 00:55:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:31.153 00:55:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.153 00:55:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:31.153 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:31.153 00:55:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.153 00:55:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:31.153 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:31.153 00:55:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:31.153 00:55:23 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:31.153 00:55:23 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:31.153 00:55:23 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.153 00:55:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.153 00:55:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:31.153 00:55:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.153 00:55:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:31.153 Found net devices under 0000:86:00.0: cvl_0_0 00:21:31.153 00:55:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.153 00:55:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.153 00:55:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.153 00:55:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:31.153 00:55:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.153 00:55:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:31.153 Found net devices under 0000:86:00.1: cvl_0_1 00:21:31.153 00:55:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.153 00:55:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:31.153 00:55:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:31.154 00:55:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:31.154 00:55:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:31.154 00:55:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:31.154 00:55:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.154 00:55:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.154 00:55:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.154 00:55:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:31.154 00:55:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.154 00:55:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.154 00:55:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:31.154 00:55:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.154 00:55:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.154 00:55:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:31.154 00:55:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:31.154 00:55:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.154 00:55:23 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.154 00:55:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.154 00:55:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.154 00:55:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:31.154 00:55:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.154 00:55:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.154 00:55:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.154 00:55:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:31.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:31.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:21:31.154 00:21:31.154 --- 10.0.0.2 ping statistics --- 00:21:31.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.154 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:21:31.154 00:55:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:21:31.154 00:21:31.154 --- 10.0.0.1 ping statistics --- 00:21:31.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.154 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:21:31.154 00:55:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.154 00:55:23 -- nvmf/common.sh@411 -- # return 0 00:21:31.154 00:55:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:31.154 00:55:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.154 00:55:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:31.154 00:55:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:31.154 00:55:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.154 00:55:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:31.154 00:55:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:31.154 00:55:23 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:21:31.154 00:55:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:31.154 00:55:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:31.154 00:55:23 -- common/autotest_common.sh@10 -- # set +x 00:21:31.154 00:55:23 -- nvmf/common.sh@470 -- # nvmfpid=1774485 00:21:31.154 00:55:23 -- nvmf/common.sh@471 -- # waitforlisten 1774485 00:21:31.154 00:55:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:31.154 00:55:23 -- common/autotest_common.sh@817 -- # '[' -z 1774485 ']' 00:21:31.154 00:55:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.154 00:55:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:31.154 00:55:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.154 00:55:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:31.154 00:55:23 -- common/autotest_common.sh@10 -- # set +x 00:21:31.154 [2024-04-27 00:55:23.707855] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:21:31.154 [2024-04-27 00:55:23.707897] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.154 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.154 [2024-04-27 00:55:23.765791] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.154 [2024-04-27 00:55:23.835255] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.154 [2024-04-27 00:55:23.835301] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:31.154 [2024-04-27 00:55:23.835309] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.154 [2024-04-27 00:55:23.835315] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.154 [2024-04-27 00:55:23.835320] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:31.154 [2024-04-27 00:55:23.835334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.093 00:55:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:32.093 00:55:24 -- common/autotest_common.sh@850 -- # return 0 00:21:32.093 00:55:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:32.093 00:55:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:32.093 00:55:24 -- common/autotest_common.sh@10 -- # set +x 00:21:32.093 00:55:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.093 00:55:24 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:21:32.093 00:55:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.093 00:55:24 -- common/autotest_common.sh@10 -- # set +x 00:21:32.093 [2024-04-27 00:55:24.542627] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.093 [2024-04-27 00:55:24.550745] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:32.093 null0 00:21:32.094 [2024-04-27 00:55:24.582774] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.094 00:55:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:32.094 00:55:24 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1774567 00:21:32.094 00:55:24 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1774567 /tmp/host.sock 00:21:32.094 00:55:24 -- common/autotest_common.sh@817 -- # '[' -z 1774567 ']' 00:21:32.094 00:55:24 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:21:32.094 00:55:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:21:32.094 00:55:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:32.094 00:55:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:32.094 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:32.094 00:55:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:32.094 00:55:24 -- common/autotest_common.sh@10 -- # set +x 00:21:32.094 [2024-04-27 00:55:24.636272] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
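The nvmf_tcp_init sequence traced above turns the two E810 ports into a two-node topology on a single machine: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), TCP port 4420 is opened in iptables, and a cross-ping confirms the link before nvmf_tgt is started under ip netns exec. A minimal standalone sketch of that plumbing follows; TGT_IF, INI_IF and NS are placeholder names, not values taken from this run.

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init plumbing traced above (placeholder names).
set -e
TGT_IF=${TGT_IF:-eth2}      # port that will carry the NVMe/TCP listener (cvl_0_0 in the run)
INI_IF=${INI_IF:-eth3}      # port the initiator keeps in the root namespace (cvl_0_1 in the run)
NS=${NS:-tgt_ns}            # network namespace holding the target side

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                       # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target address

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in

ping -c 1 10.0.0.2                                          # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                      # target namespace -> root namespace

The target application is then prefixed with ip netns exec "$NS" (the NVMF_TARGET_NS_CMD array in the trace) so its listeners bind inside the namespace while the host-side initiator connects from the root namespace.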
00:21:32.094 [2024-04-27 00:55:24.636314] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774567 ] 00:21:32.094 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.094 [2024-04-27 00:55:24.688785] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.094 [2024-04-27 00:55:24.765588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.032 00:55:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:33.033 00:55:25 -- common/autotest_common.sh@850 -- # return 0 00:21:33.033 00:55:25 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:33.033 00:55:25 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:21:33.033 00:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.033 00:55:25 -- common/autotest_common.sh@10 -- # set +x 00:21:33.033 00:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.033 00:55:25 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:21:33.033 00:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.033 00:55:25 -- common/autotest_common.sh@10 -- # set +x 00:21:33.033 00:55:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.033 00:55:25 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:21:33.033 00:55:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.033 00:55:25 -- common/autotest_common.sh@10 -- # set +x 00:21:33.971 [2024-04-27 00:55:26.552018] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:33.971 [2024-04-27 00:55:26.552040] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:33.971 [2024-04-27 00:55:26.552054] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:33.971 [2024-04-27 00:55:26.640326] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:34.231 [2024-04-27 00:55:26.701524] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:34.231 [2024-04-27 00:55:26.701566] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:34.231 [2024-04-27 00:55:26.701586] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:34.231 [2024-04-27 00:55:26.701598] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:34.231 [2024-04-27 00:55:26.701617] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:34.231 00:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.231 00:55:26 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:34.231 00:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:34.231 00:55:26 -- common/autotest_common.sh@10 -- # set +x 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:34.231 00:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:21:34.231 [2024-04-27 00:55:26.751880] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24d4770 was disconnected and freed. delete nvme_qpair. 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:34.231 00:55:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:34.231 00:55:26 -- common/autotest_common.sh@10 -- # set +x 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:34.231 00:55:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:34.231 00:55:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:35.610 00:55:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:35.610 00:55:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.610 00:55:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:35.610 00:55:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:35.610 00:55:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.610 00:55:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:35.610 00:55:27 -- common/autotest_common.sh@10 -- # set +x 00:21:35.610 00:55:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.610 00:55:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:35.610 00:55:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:36.548 00:55:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:36.548 00:55:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:36.548 00:55:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:36.548 00:55:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:36.548 00:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.548 00:55:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:36.548 00:55:28 -- common/autotest_common.sh@10 -- # set +x 00:21:36.548 00:55:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.548 00:55:29 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:36.548 00:55:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:37.525 00:55:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:37.525 00:55:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:37.525 00:55:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:37.525 00:55:30 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:21:37.525 00:55:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:37.525 00:55:30 -- common/autotest_common.sh@10 -- # set +x 00:21:37.525 00:55:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:37.525 00:55:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:37.525 00:55:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:37.525 00:55:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:38.490 00:55:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:38.490 00:55:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:38.490 00:55:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.490 00:55:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:38.490 00:55:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.490 00:55:31 -- common/autotest_common.sh@10 -- # set +x 00:21:38.490 00:55:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:38.490 00:55:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.490 00:55:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:38.490 00:55:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:39.429 00:55:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:39.429 00:55:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:39.429 00:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:39.429 00:55:32 -- common/autotest_common.sh@10 -- # set +x 00:21:39.429 00:55:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:39.429 00:55:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:39.429 00:55:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:39.689 00:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:39.689 [2024-04-27 00:55:32.152763] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:39.689 [2024-04-27 00:55:32.152802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.689 [2024-04-27 00:55:32.152813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.689 [2024-04-27 00:55:32.152822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.689 [2024-04-27 00:55:32.152829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.689 [2024-04-27 00:55:32.152836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.689 [2024-04-27 00:55:32.152842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.689 [2024-04-27 00:55:32.152849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.689 [2024-04-27 00:55:32.152856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.689 [2024-04-27 00:55:32.152863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.689 [2024-04-27 00:55:32.152869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.689 [2024-04-27 00:55:32.152875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aa60 is same with the state(5) to be set 00:21:39.689 [2024-04-27 00:55:32.162784] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aa60 (9): Bad file descriptor 00:21:39.689 00:55:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:39.689 00:55:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:39.689 [2024-04-27 00:55:32.172823] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:40.627 00:55:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:40.627 00:55:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:40.627 00:55:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:40.627 00:55:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:40.627 00:55:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:40.627 00:55:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:40.627 00:55:33 -- common/autotest_common.sh@10 -- # set +x 00:21:40.627 [2024-04-27 00:55:33.180087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:41.564 [2024-04-27 00:55:34.204090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:21:41.564 [2024-04-27 00:55:34.204131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249aa60 with addr=10.0.0.2, port=4420 00:21:41.565 [2024-04-27 00:55:34.204145] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249aa60 is same with the state(5) to be set 00:21:41.565 [2024-04-27 00:55:34.204528] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249aa60 (9): Bad file descriptor 00:21:41.565 [2024-04-27 00:55:34.204554] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
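The repeated bdev_get_bdevs / jq / sort / xargs traces are the test's get_bdev_list helper: after discovery is started with --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1, the script deletes the target-side address and downs the link, then polls the host's bdev list once a second until reconnect attempts give up (the errno 110 and Bad file descriptor errors around this point) and nvme0n1 drops out; later it restores the interface and waits for the bdev to reappear. A rough standalone equivalent of that poll, assuming SPDK's scripts/rpc.py is reachable on PATH as rpc.py and the host app owns /tmp/host.sock, might look like:

# Hedged sketch of the get_bdev_list / wait_for_bdev polling seen in the trace.
# Assumes rpc.py can reach the host application's RPC socket.
RPC_SOCK=/tmp/host.sock

get_bdev_list() {
    # Same pipeline as the trace: bdev names only, sorted, joined on one line.
    rpc.py -s "$RPC_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1   # discovery attached the subsystem; the namespace bdev is present
# ... drop the target-side interface here (ip addr del / ip link set ... down) ...
wait_for_bdev ''        # ctrlr-loss-timeout expired; the bdev has been removed

This sketch polls forever; the real helper runs inside the test's own timeouts and traps, so no separate deadline is shown here.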
00:21:41.565 [2024-04-27 00:55:34.204578] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:21:41.565 [2024-04-27 00:55:34.204601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.565 [2024-04-27 00:55:34.204613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.565 [2024-04-27 00:55:34.204625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.565 [2024-04-27 00:55:34.204635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.565 [2024-04-27 00:55:34.204645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.565 [2024-04-27 00:55:34.204655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.565 [2024-04-27 00:55:34.204665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.565 [2024-04-27 00:55:34.204674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.565 [2024-04-27 00:55:34.204685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:41.565 [2024-04-27 00:55:34.204695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:41.565 [2024-04-27 00:55:34.204705] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:21:41.565 [2024-04-27 00:55:34.205169] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249ae70 (9): Bad file descriptor 00:21:41.565 [2024-04-27 00:55:34.206182] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:41.565 [2024-04-27 00:55:34.206196] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:21:41.565 00:55:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:41.565 00:55:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:41.565 00:55:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:42.944 00:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:42.944 00:55:35 -- common/autotest_common.sh@10 -- # set +x 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:42.944 00:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:42.944 00:55:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:42.944 00:55:35 -- common/autotest_common.sh@10 -- # set +x 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:42.944 00:55:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:42.944 00:55:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:43.883 [2024-04-27 00:55:36.257879] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:43.883 [2024-04-27 00:55:36.257895] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:43.883 [2024-04-27 00:55:36.257909] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:43.883 [2024-04-27 00:55:36.387315] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:21:43.883 00:55:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:43.883 00:55:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.883 00:55:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:43.883 00:55:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:43.883 00:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:43.883 00:55:36 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:21:43.883 00:55:36 -- common/autotest_common.sh@10 -- # set +x 00:21:43.883 00:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:43.883 00:55:36 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:43.883 00:55:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:43.883 [2024-04-27 00:55:36.528174] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:43.883 [2024-04-27 00:55:36.528210] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:43.883 [2024-04-27 00:55:36.528227] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:43.883 [2024-04-27 00:55:36.528240] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:21:43.883 [2024-04-27 00:55:36.528247] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:43.883 [2024-04-27 00:55:36.536239] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24d4570 was disconnected and freed. delete nvme_qpair. 00:21:44.821 00:55:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:44.821 00:55:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.821 00:55:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:44.821 00:55:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:44.821 00:55:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:44.821 00:55:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:44.821 00:55:37 -- common/autotest_common.sh@10 -- # set +x 00:21:44.821 00:55:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:45.080 00:55:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:45.080 00:55:37 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:45.080 00:55:37 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1774567 00:21:45.080 00:55:37 -- common/autotest_common.sh@936 -- # '[' -z 1774567 ']' 00:21:45.080 00:55:37 -- common/autotest_common.sh@940 -- # kill -0 1774567 00:21:45.080 00:55:37 -- common/autotest_common.sh@941 -- # uname 00:21:45.080 00:55:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:45.080 00:55:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1774567 00:21:45.080 00:55:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:45.080 00:55:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:45.080 00:55:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1774567' 00:21:45.080 killing process with pid 1774567 00:21:45.080 00:55:37 -- common/autotest_common.sh@955 -- # kill 1774567 00:21:45.080 00:55:37 -- common/autotest_common.sh@960 -- # wait 1774567 00:21:45.080 00:55:37 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:45.080 00:55:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:45.080 00:55:37 -- nvmf/common.sh@117 -- # sync 00:21:45.080 00:55:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.080 00:55:37 -- nvmf/common.sh@120 -- # set +e 00:21:45.080 00:55:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.080 00:55:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.080 rmmod nvme_tcp 00:21:45.339 rmmod nvme_fabrics 00:21:45.339 rmmod nvme_keyring 00:21:45.339 00:55:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.339 00:55:37 -- nvmf/common.sh@124 -- # set -e 00:21:45.339 00:55:37 
-- nvmf/common.sh@125 -- # return 0 00:21:45.339 00:55:37 -- nvmf/common.sh@478 -- # '[' -n 1774485 ']' 00:21:45.339 00:55:37 -- nvmf/common.sh@479 -- # killprocess 1774485 00:21:45.339 00:55:37 -- common/autotest_common.sh@936 -- # '[' -z 1774485 ']' 00:21:45.339 00:55:37 -- common/autotest_common.sh@940 -- # kill -0 1774485 00:21:45.339 00:55:37 -- common/autotest_common.sh@941 -- # uname 00:21:45.339 00:55:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:45.339 00:55:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1774485 00:21:45.339 00:55:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:45.339 00:55:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:45.339 00:55:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1774485' 00:21:45.339 killing process with pid 1774485 00:21:45.339 00:55:37 -- common/autotest_common.sh@955 -- # kill 1774485 00:21:45.339 00:55:37 -- common/autotest_common.sh@960 -- # wait 1774485 00:21:45.598 00:55:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:45.598 00:55:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:45.598 00:55:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:45.598 00:55:38 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.598 00:55:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:45.598 00:55:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.598 00:55:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.598 00:55:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.503 00:55:40 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:47.503 00:21:47.503 real 0m21.910s 00:21:47.503 user 0m27.419s 00:21:47.503 sys 0m5.284s 00:21:47.503 00:55:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:47.503 00:55:40 -- common/autotest_common.sh@10 -- # set +x 00:21:47.503 ************************************ 00:21:47.503 END TEST nvmf_discovery_remove_ifc 00:21:47.503 ************************************ 00:21:47.503 00:55:40 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:47.503 00:55:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:47.503 00:55:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:47.503 00:55:40 -- common/autotest_common.sh@10 -- # set +x 00:21:47.761 ************************************ 00:21:47.761 START TEST nvmf_identify_kernel_target 00:21:47.761 ************************************ 00:21:47.761 00:55:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:47.761 * Looking for test storage... 
00:21:47.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:47.761 00:55:40 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.761 00:55:40 -- nvmf/common.sh@7 -- # uname -s 00:21:47.761 00:55:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.761 00:55:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.761 00:55:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.761 00:55:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.761 00:55:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.761 00:55:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.761 00:55:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.761 00:55:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.761 00:55:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.761 00:55:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.761 00:55:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:47.761 00:55:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:47.761 00:55:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.761 00:55:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.761 00:55:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.761 00:55:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.761 00:55:40 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.761 00:55:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.761 00:55:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.761 00:55:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.761 00:55:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.761 00:55:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.761 00:55:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.761 00:55:40 -- paths/export.sh@5 -- # export PATH 00:21:47.761 00:55:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.761 00:55:40 -- nvmf/common.sh@47 -- # : 0 00:21:47.761 00:55:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.761 00:55:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.761 00:55:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.761 00:55:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.761 00:55:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.761 00:55:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.761 00:55:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.761 00:55:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.761 00:55:40 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:47.761 00:55:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:47.761 00:55:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.761 00:55:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:47.761 00:55:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:47.761 00:55:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:47.761 00:55:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.761 00:55:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.761 00:55:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.761 00:55:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:47.761 00:55:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:47.761 00:55:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:47.761 00:55:40 -- common/autotest_common.sh@10 -- # set +x 00:21:53.077 00:55:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:53.077 00:55:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:53.077 00:55:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:53.077 00:55:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:53.077 00:55:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:53.077 00:55:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:53.077 00:55:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:53.077 00:55:44 -- nvmf/common.sh@295 -- # net_devs=() 00:21:53.077 00:55:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:53.077 00:55:44 -- nvmf/common.sh@296 -- # e810=() 00:21:53.077 00:55:44 -- nvmf/common.sh@296 -- # local -ga e810 00:21:53.077 00:55:44 -- nvmf/common.sh@297 -- # 
x722=() 00:21:53.077 00:55:44 -- nvmf/common.sh@297 -- # local -ga x722 00:21:53.077 00:55:44 -- nvmf/common.sh@298 -- # mlx=() 00:21:53.077 00:55:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:53.077 00:55:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.077 00:55:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.077 00:55:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.077 00:55:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.077 00:55:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.077 00:55:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.077 00:55:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.077 00:55:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.077 00:55:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.077 00:55:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.077 00:55:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.077 00:55:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:53.077 00:55:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:53.077 00:55:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:53.077 00:55:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.077 00:55:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:53.077 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:53.077 00:55:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:53.077 00:55:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:53.077 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:53.077 00:55:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:53.077 00:55:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.077 00:55:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.077 00:55:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:53.077 00:55:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.077 00:55:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:53.077 Found net devices under 0000:86:00.0: cvl_0_0 00:21:53.077 00:55:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:53.077 00:55:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:53.077 00:55:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.077 00:55:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:53.077 00:55:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.077 00:55:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:53.077 Found net devices under 0000:86:00.1: cvl_0_1 00:21:53.077 00:55:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.077 00:55:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:53.077 00:55:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:53.077 00:55:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:53.077 00:55:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:53.077 00:55:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.077 00:55:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.077 00:55:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.077 00:55:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:53.077 00:55:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.077 00:55:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.077 00:55:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:53.077 00:55:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.077 00:55:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.077 00:55:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:53.077 00:55:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:53.077 00:55:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.077 00:55:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.077 00:55:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.077 00:55:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.077 00:55:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:53.077 00:55:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.077 00:55:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.077 00:55:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.077 00:55:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:53.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:21:53.077 00:21:53.077 --- 10.0.0.2 ping statistics --- 00:21:53.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.077 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:21:53.077 00:55:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:53.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.408 ms 00:21:53.077 00:21:53.077 --- 10.0.0.1 ping statistics --- 00:21:53.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.077 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:21:53.077 00:55:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.077 00:55:45 -- nvmf/common.sh@411 -- # return 0 00:21:53.077 00:55:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:53.077 00:55:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.077 00:55:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:53.077 00:55:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:53.077 00:55:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.077 00:55:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:53.077 00:55:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:53.077 00:55:45 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:53.077 00:55:45 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:53.077 00:55:45 -- nvmf/common.sh@717 -- # local ip 00:21:53.077 00:55:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:21:53.077 00:55:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:21:53.077 00:55:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:53.077 00:55:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:53.077 00:55:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:21:53.077 00:55:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:53.077 00:55:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:21:53.077 00:55:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:21:53.077 00:55:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:21:53.077 00:55:45 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:53.077 00:55:45 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:53.077 00:55:45 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:53.077 00:55:45 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:21:53.078 00:55:45 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:53.078 00:55:45 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:53.078 00:55:45 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:53.078 00:55:45 -- nvmf/common.sh@628 -- # local block nvme 00:21:53.078 00:55:45 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:53.078 00:55:45 -- nvmf/common.sh@631 -- # modprobe nvmet 00:21:53.078 00:55:45 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:53.078 00:55:45 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:21:54.980 Waiting for block devices as requested 00:21:54.980 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:21:55.239 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:55.239 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:55.239 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:55.498 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:55.498 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:55.498 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:55.498 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:55.757 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:55.757 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:55.757 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:56.015 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:56.015 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:56.015 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:56.015 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:56.274 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:56.274 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:56.274 00:55:48 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:21:56.274 00:55:48 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:56.274 00:55:48 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:21:56.274 00:55:48 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:21:56.274 00:55:48 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:56.274 00:55:48 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:21:56.274 00:55:48 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:21:56.274 00:55:48 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:56.274 00:55:48 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:21:56.274 No valid GPT data, bailing 00:21:56.274 00:55:48 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:56.274 00:55:48 -- scripts/common.sh@391 -- # pt= 00:21:56.274 00:55:48 -- scripts/common.sh@392 -- # return 1 00:21:56.274 00:55:48 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:21:56.274 00:55:48 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:21:56.274 00:55:48 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:56.274 00:55:48 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:56.533 00:55:48 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:56.533 00:55:48 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:56.533 00:55:48 -- nvmf/common.sh@656 -- # echo 1 00:21:56.533 00:55:48 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:21:56.533 00:55:48 -- nvmf/common.sh@658 -- # echo 1 00:21:56.533 00:55:48 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:21:56.533 00:55:48 -- nvmf/common.sh@661 -- # echo tcp 00:21:56.533 00:55:48 -- nvmf/common.sh@662 -- # echo 4420 00:21:56.533 00:55:48 -- nvmf/common.sh@663 -- # echo ipv4 00:21:56.533 00:55:48 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:56.533 00:55:48 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:21:56.533 00:21:56.533 Discovery Log Number of Records 2, Generation counter 2 00:21:56.533 =====Discovery Log Entry 0====== 00:21:56.533 trtype: tcp 00:21:56.533 adrfam: ipv4 00:21:56.533 subtype: current discovery subsystem 00:21:56.533 treq: not specified, sq flow control disable supported 00:21:56.533 portid: 1 00:21:56.533 trsvcid: 4420 00:21:56.533 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:56.533 traddr: 10.0.0.1 00:21:56.533 eflags: none 00:21:56.533 sectype: none 00:21:56.533 =====Discovery Log Entry 1====== 00:21:56.533 trtype: tcp 00:21:56.533 adrfam: ipv4 00:21:56.533 subtype: nvme subsystem 00:21:56.533 treq: not specified, sq flow control disable supported 00:21:56.533 portid: 1 00:21:56.533 trsvcid: 4420 00:21:56.533 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:56.533 traddr: 10.0.0.1 00:21:56.533 eflags: none 00:21:56.533 sectype: none 00:21:56.533 00:55:49 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:21:56.533 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:56.533 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.533 ===================================================== 00:21:56.533 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:56.533 ===================================================== 00:21:56.533 Controller Capabilities/Features 00:21:56.533 ================================ 00:21:56.533 Vendor ID: 0000 00:21:56.533 Subsystem Vendor ID: 0000 00:21:56.533 Serial Number: c8db0af217c898d21710 00:21:56.533 Model Number: Linux 00:21:56.533 Firmware Version: 6.7.0-68 00:21:56.533 Recommended Arb Burst: 0 00:21:56.533 IEEE OUI Identifier: 00 00 00 00:21:56.533 Multi-path I/O 00:21:56.533 May have multiple subsystem ports: No 00:21:56.533 May have multiple controllers: No 00:21:56.533 Associated with SR-IOV VF: No 00:21:56.533 Max Data Transfer Size: Unlimited 00:21:56.533 Max Number of Namespaces: 0 00:21:56.533 Max Number of I/O Queues: 1024 00:21:56.533 NVMe Specification Version (VS): 1.3 00:21:56.533 NVMe Specification Version (Identify): 1.3 00:21:56.533 Maximum Queue Entries: 1024 00:21:56.533 Contiguous Queues Required: No 00:21:56.533 Arbitration Mechanisms Supported 00:21:56.533 Weighted Round Robin: Not Supported 00:21:56.533 Vendor Specific: Not Supported 00:21:56.533 Reset Timeout: 7500 ms 00:21:56.533 Doorbell Stride: 4 bytes 00:21:56.533 NVM Subsystem Reset: Not Supported 00:21:56.533 Command Sets Supported 00:21:56.533 NVM Command Set: Supported 00:21:56.533 Boot Partition: Not Supported 00:21:56.533 Memory Page Size Minimum: 4096 bytes 00:21:56.533 Memory Page Size Maximum: 4096 bytes 00:21:56.533 Persistent Memory Region: Not Supported 00:21:56.533 Optional Asynchronous Events Supported 00:21:56.533 Namespace Attribute Notices: Not Supported 00:21:56.533 Firmware Activation Notices: Not Supported 00:21:56.533 ANA Change Notices: Not Supported 00:21:56.533 PLE Aggregate Log Change Notices: Not Supported 00:21:56.533 LBA Status Info Alert Notices: Not Supported 00:21:56.533 EGE Aggregate Log Change Notices: Not Supported 00:21:56.533 Normal NVM Subsystem Shutdown event: Not Supported 00:21:56.533 Zone Descriptor Change Notices: Not Supported 00:21:56.533 Discovery Log Change Notices: Supported 
00:21:56.533 Controller Attributes 00:21:56.533 128-bit Host Identifier: Not Supported 00:21:56.533 Non-Operational Permissive Mode: Not Supported 00:21:56.533 NVM Sets: Not Supported 00:21:56.533 Read Recovery Levels: Not Supported 00:21:56.533 Endurance Groups: Not Supported 00:21:56.533 Predictable Latency Mode: Not Supported 00:21:56.533 Traffic Based Keep ALive: Not Supported 00:21:56.533 Namespace Granularity: Not Supported 00:21:56.533 SQ Associations: Not Supported 00:21:56.533 UUID List: Not Supported 00:21:56.533 Multi-Domain Subsystem: Not Supported 00:21:56.533 Fixed Capacity Management: Not Supported 00:21:56.533 Variable Capacity Management: Not Supported 00:21:56.533 Delete Endurance Group: Not Supported 00:21:56.533 Delete NVM Set: Not Supported 00:21:56.533 Extended LBA Formats Supported: Not Supported 00:21:56.533 Flexible Data Placement Supported: Not Supported 00:21:56.533 00:21:56.533 Controller Memory Buffer Support 00:21:56.533 ================================ 00:21:56.533 Supported: No 00:21:56.533 00:21:56.533 Persistent Memory Region Support 00:21:56.533 ================================ 00:21:56.533 Supported: No 00:21:56.533 00:21:56.533 Admin Command Set Attributes 00:21:56.533 ============================ 00:21:56.533 Security Send/Receive: Not Supported 00:21:56.533 Format NVM: Not Supported 00:21:56.533 Firmware Activate/Download: Not Supported 00:21:56.533 Namespace Management: Not Supported 00:21:56.533 Device Self-Test: Not Supported 00:21:56.533 Directives: Not Supported 00:21:56.533 NVMe-MI: Not Supported 00:21:56.533 Virtualization Management: Not Supported 00:21:56.533 Doorbell Buffer Config: Not Supported 00:21:56.533 Get LBA Status Capability: Not Supported 00:21:56.533 Command & Feature Lockdown Capability: Not Supported 00:21:56.533 Abort Command Limit: 1 00:21:56.533 Async Event Request Limit: 1 00:21:56.533 Number of Firmware Slots: N/A 00:21:56.533 Firmware Slot 1 Read-Only: N/A 00:21:56.533 Firmware Activation Without Reset: N/A 00:21:56.533 Multiple Update Detection Support: N/A 00:21:56.533 Firmware Update Granularity: No Information Provided 00:21:56.533 Per-Namespace SMART Log: No 00:21:56.533 Asymmetric Namespace Access Log Page: Not Supported 00:21:56.533 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:56.533 Command Effects Log Page: Not Supported 00:21:56.533 Get Log Page Extended Data: Supported 00:21:56.533 Telemetry Log Pages: Not Supported 00:21:56.533 Persistent Event Log Pages: Not Supported 00:21:56.533 Supported Log Pages Log Page: May Support 00:21:56.533 Commands Supported & Effects Log Page: Not Supported 00:21:56.533 Feature Identifiers & Effects Log Page:May Support 00:21:56.533 NVMe-MI Commands & Effects Log Page: May Support 00:21:56.533 Data Area 4 for Telemetry Log: Not Supported 00:21:56.533 Error Log Page Entries Supported: 1 00:21:56.533 Keep Alive: Not Supported 00:21:56.533 00:21:56.534 NVM Command Set Attributes 00:21:56.534 ========================== 00:21:56.534 Submission Queue Entry Size 00:21:56.534 Max: 1 00:21:56.534 Min: 1 00:21:56.534 Completion Queue Entry Size 00:21:56.534 Max: 1 00:21:56.534 Min: 1 00:21:56.534 Number of Namespaces: 0 00:21:56.534 Compare Command: Not Supported 00:21:56.534 Write Uncorrectable Command: Not Supported 00:21:56.534 Dataset Management Command: Not Supported 00:21:56.534 Write Zeroes Command: Not Supported 00:21:56.534 Set Features Save Field: Not Supported 00:21:56.534 Reservations: Not Supported 00:21:56.534 Timestamp: Not Supported 00:21:56.534 Copy: Not 
Supported 00:21:56.534 Volatile Write Cache: Not Present 00:21:56.534 Atomic Write Unit (Normal): 1 00:21:56.534 Atomic Write Unit (PFail): 1 00:21:56.534 Atomic Compare & Write Unit: 1 00:21:56.534 Fused Compare & Write: Not Supported 00:21:56.534 Scatter-Gather List 00:21:56.534 SGL Command Set: Supported 00:21:56.534 SGL Keyed: Not Supported 00:21:56.534 SGL Bit Bucket Descriptor: Not Supported 00:21:56.534 SGL Metadata Pointer: Not Supported 00:21:56.534 Oversized SGL: Not Supported 00:21:56.534 SGL Metadata Address: Not Supported 00:21:56.534 SGL Offset: Supported 00:21:56.534 Transport SGL Data Block: Not Supported 00:21:56.534 Replay Protected Memory Block: Not Supported 00:21:56.534 00:21:56.534 Firmware Slot Information 00:21:56.534 ========================= 00:21:56.534 Active slot: 0 00:21:56.534 00:21:56.534 00:21:56.534 Error Log 00:21:56.534 ========= 00:21:56.534 00:21:56.534 Active Namespaces 00:21:56.534 ================= 00:21:56.534 Discovery Log Page 00:21:56.534 ================== 00:21:56.534 Generation Counter: 2 00:21:56.534 Number of Records: 2 00:21:56.534 Record Format: 0 00:21:56.534 00:21:56.534 Discovery Log Entry 0 00:21:56.534 ---------------------- 00:21:56.534 Transport Type: 3 (TCP) 00:21:56.534 Address Family: 1 (IPv4) 00:21:56.534 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:56.534 Entry Flags: 00:21:56.534 Duplicate Returned Information: 0 00:21:56.534 Explicit Persistent Connection Support for Discovery: 0 00:21:56.534 Transport Requirements: 00:21:56.534 Secure Channel: Not Specified 00:21:56.534 Port ID: 1 (0x0001) 00:21:56.534 Controller ID: 65535 (0xffff) 00:21:56.534 Admin Max SQ Size: 32 00:21:56.534 Transport Service Identifier: 4420 00:21:56.534 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:56.534 Transport Address: 10.0.0.1 00:21:56.534 Discovery Log Entry 1 00:21:56.534 ---------------------- 00:21:56.534 Transport Type: 3 (TCP) 00:21:56.534 Address Family: 1 (IPv4) 00:21:56.534 Subsystem Type: 2 (NVM Subsystem) 00:21:56.534 Entry Flags: 00:21:56.534 Duplicate Returned Information: 0 00:21:56.534 Explicit Persistent Connection Support for Discovery: 0 00:21:56.534 Transport Requirements: 00:21:56.534 Secure Channel: Not Specified 00:21:56.534 Port ID: 1 (0x0001) 00:21:56.534 Controller ID: 65535 (0xffff) 00:21:56.534 Admin Max SQ Size: 32 00:21:56.534 Transport Service Identifier: 4420 00:21:56.534 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:56.534 Transport Address: 10.0.0.1 00:21:56.534 00:55:49 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:56.534 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.534 get_feature(0x01) failed 00:21:56.534 get_feature(0x02) failed 00:21:56.534 get_feature(0x04) failed 00:21:56.534 ===================================================== 00:21:56.534 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:56.534 ===================================================== 00:21:56.534 Controller Capabilities/Features 00:21:56.534 ================================ 00:21:56.534 Vendor ID: 0000 00:21:56.534 Subsystem Vendor ID: 0000 00:21:56.534 Serial Number: 62ad8c756fb3df00f7f0 00:21:56.534 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:56.534 Firmware Version: 6.7.0-68 00:21:56.534 Recommended Arb Burst: 6 00:21:56.534 IEEE OUI Identifier: 00 00 00 
00:21:56.534 Multi-path I/O 00:21:56.534 May have multiple subsystem ports: Yes 00:21:56.534 May have multiple controllers: Yes 00:21:56.534 Associated with SR-IOV VF: No 00:21:56.534 Max Data Transfer Size: Unlimited 00:21:56.534 Max Number of Namespaces: 1024 00:21:56.534 Max Number of I/O Queues: 128 00:21:56.534 NVMe Specification Version (VS): 1.3 00:21:56.534 NVMe Specification Version (Identify): 1.3 00:21:56.534 Maximum Queue Entries: 1024 00:21:56.534 Contiguous Queues Required: No 00:21:56.534 Arbitration Mechanisms Supported 00:21:56.534 Weighted Round Robin: Not Supported 00:21:56.534 Vendor Specific: Not Supported 00:21:56.534 Reset Timeout: 7500 ms 00:21:56.534 Doorbell Stride: 4 bytes 00:21:56.534 NVM Subsystem Reset: Not Supported 00:21:56.534 Command Sets Supported 00:21:56.534 NVM Command Set: Supported 00:21:56.534 Boot Partition: Not Supported 00:21:56.534 Memory Page Size Minimum: 4096 bytes 00:21:56.534 Memory Page Size Maximum: 4096 bytes 00:21:56.534 Persistent Memory Region: Not Supported 00:21:56.534 Optional Asynchronous Events Supported 00:21:56.534 Namespace Attribute Notices: Supported 00:21:56.534 Firmware Activation Notices: Not Supported 00:21:56.534 ANA Change Notices: Supported 00:21:56.534 PLE Aggregate Log Change Notices: Not Supported 00:21:56.534 LBA Status Info Alert Notices: Not Supported 00:21:56.534 EGE Aggregate Log Change Notices: Not Supported 00:21:56.534 Normal NVM Subsystem Shutdown event: Not Supported 00:21:56.534 Zone Descriptor Change Notices: Not Supported 00:21:56.534 Discovery Log Change Notices: Not Supported 00:21:56.534 Controller Attributes 00:21:56.534 128-bit Host Identifier: Supported 00:21:56.534 Non-Operational Permissive Mode: Not Supported 00:21:56.534 NVM Sets: Not Supported 00:21:56.534 Read Recovery Levels: Not Supported 00:21:56.534 Endurance Groups: Not Supported 00:21:56.534 Predictable Latency Mode: Not Supported 00:21:56.534 Traffic Based Keep ALive: Supported 00:21:56.534 Namespace Granularity: Not Supported 00:21:56.534 SQ Associations: Not Supported 00:21:56.534 UUID List: Not Supported 00:21:56.534 Multi-Domain Subsystem: Not Supported 00:21:56.534 Fixed Capacity Management: Not Supported 00:21:56.534 Variable Capacity Management: Not Supported 00:21:56.534 Delete Endurance Group: Not Supported 00:21:56.534 Delete NVM Set: Not Supported 00:21:56.534 Extended LBA Formats Supported: Not Supported 00:21:56.534 Flexible Data Placement Supported: Not Supported 00:21:56.534 00:21:56.534 Controller Memory Buffer Support 00:21:56.534 ================================ 00:21:56.534 Supported: No 00:21:56.534 00:21:56.534 Persistent Memory Region Support 00:21:56.534 ================================ 00:21:56.534 Supported: No 00:21:56.534 00:21:56.534 Admin Command Set Attributes 00:21:56.534 ============================ 00:21:56.534 Security Send/Receive: Not Supported 00:21:56.534 Format NVM: Not Supported 00:21:56.534 Firmware Activate/Download: Not Supported 00:21:56.534 Namespace Management: Not Supported 00:21:56.534 Device Self-Test: Not Supported 00:21:56.534 Directives: Not Supported 00:21:56.534 NVMe-MI: Not Supported 00:21:56.534 Virtualization Management: Not Supported 00:21:56.534 Doorbell Buffer Config: Not Supported 00:21:56.534 Get LBA Status Capability: Not Supported 00:21:56.534 Command & Feature Lockdown Capability: Not Supported 00:21:56.534 Abort Command Limit: 4 00:21:56.534 Async Event Request Limit: 4 00:21:56.534 Number of Firmware Slots: N/A 00:21:56.534 Firmware Slot 1 Read-Only: N/A 00:21:56.534 
Firmware Activation Without Reset: N/A 00:21:56.534 Multiple Update Detection Support: N/A 00:21:56.534 Firmware Update Granularity: No Information Provided 00:21:56.534 Per-Namespace SMART Log: Yes 00:21:56.534 Asymmetric Namespace Access Log Page: Supported 00:21:56.534 ANA Transition Time : 10 sec 00:21:56.534 00:21:56.534 Asymmetric Namespace Access Capabilities 00:21:56.534 ANA Optimized State : Supported 00:21:56.534 ANA Non-Optimized State : Supported 00:21:56.534 ANA Inaccessible State : Supported 00:21:56.534 ANA Persistent Loss State : Supported 00:21:56.534 ANA Change State : Supported 00:21:56.534 ANAGRPID is not changed : No 00:21:56.534 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:56.534 00:21:56.534 ANA Group Identifier Maximum : 128 00:21:56.534 Number of ANA Group Identifiers : 128 00:21:56.534 Max Number of Allowed Namespaces : 1024 00:21:56.534 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:21:56.534 Command Effects Log Page: Supported 00:21:56.534 Get Log Page Extended Data: Supported 00:21:56.534 Telemetry Log Pages: Not Supported 00:21:56.534 Persistent Event Log Pages: Not Supported 00:21:56.534 Supported Log Pages Log Page: May Support 00:21:56.534 Commands Supported & Effects Log Page: Not Supported 00:21:56.534 Feature Identifiers & Effects Log Page:May Support 00:21:56.534 NVMe-MI Commands & Effects Log Page: May Support 00:21:56.534 Data Area 4 for Telemetry Log: Not Supported 00:21:56.534 Error Log Page Entries Supported: 128 00:21:56.534 Keep Alive: Supported 00:21:56.534 Keep Alive Granularity: 1000 ms 00:21:56.534 00:21:56.534 NVM Command Set Attributes 00:21:56.534 ========================== 00:21:56.534 Submission Queue Entry Size 00:21:56.534 Max: 64 00:21:56.534 Min: 64 00:21:56.534 Completion Queue Entry Size 00:21:56.534 Max: 16 00:21:56.534 Min: 16 00:21:56.534 Number of Namespaces: 1024 00:21:56.534 Compare Command: Not Supported 00:21:56.534 Write Uncorrectable Command: Not Supported 00:21:56.534 Dataset Management Command: Supported 00:21:56.534 Write Zeroes Command: Supported 00:21:56.534 Set Features Save Field: Not Supported 00:21:56.534 Reservations: Not Supported 00:21:56.534 Timestamp: Not Supported 00:21:56.534 Copy: Not Supported 00:21:56.534 Volatile Write Cache: Present 00:21:56.534 Atomic Write Unit (Normal): 1 00:21:56.534 Atomic Write Unit (PFail): 1 00:21:56.534 Atomic Compare & Write Unit: 1 00:21:56.534 Fused Compare & Write: Not Supported 00:21:56.534 Scatter-Gather List 00:21:56.534 SGL Command Set: Supported 00:21:56.534 SGL Keyed: Not Supported 00:21:56.534 SGL Bit Bucket Descriptor: Not Supported 00:21:56.534 SGL Metadata Pointer: Not Supported 00:21:56.534 Oversized SGL: Not Supported 00:21:56.534 SGL Metadata Address: Not Supported 00:21:56.534 SGL Offset: Supported 00:21:56.534 Transport SGL Data Block: Not Supported 00:21:56.534 Replay Protected Memory Block: Not Supported 00:21:56.534 00:21:56.534 Firmware Slot Information 00:21:56.534 ========================= 00:21:56.534 Active slot: 0 00:21:56.534 00:21:56.534 Asymmetric Namespace Access 00:21:56.534 =========================== 00:21:56.534 Change Count : 0 00:21:56.534 Number of ANA Group Descriptors : 1 00:21:56.534 ANA Group Descriptor : 0 00:21:56.534 ANA Group ID : 1 00:21:56.534 Number of NSID Values : 1 00:21:56.534 Change Count : 0 00:21:56.534 ANA State : 1 00:21:56.534 Namespace Identifier : 1 00:21:56.534 00:21:56.534 Commands Supported and Effects 00:21:56.534 ============================== 00:21:56.534 Admin Commands 00:21:56.534 -------------- 
00:21:56.534 Get Log Page (02h): Supported 00:21:56.534 Identify (06h): Supported 00:21:56.534 Abort (08h): Supported 00:21:56.534 Set Features (09h): Supported 00:21:56.534 Get Features (0Ah): Supported 00:21:56.534 Asynchronous Event Request (0Ch): Supported 00:21:56.534 Keep Alive (18h): Supported 00:21:56.534 I/O Commands 00:21:56.534 ------------ 00:21:56.534 Flush (00h): Supported 00:21:56.534 Write (01h): Supported LBA-Change 00:21:56.534 Read (02h): Supported 00:21:56.534 Write Zeroes (08h): Supported LBA-Change 00:21:56.534 Dataset Management (09h): Supported 00:21:56.534 00:21:56.534 Error Log 00:21:56.534 ========= 00:21:56.534 Entry: 0 00:21:56.534 Error Count: 0x3 00:21:56.534 Submission Queue Id: 0x0 00:21:56.534 Command Id: 0x5 00:21:56.534 Phase Bit: 0 00:21:56.534 Status Code: 0x2 00:21:56.534 Status Code Type: 0x0 00:21:56.534 Do Not Retry: 1 00:21:56.534 Error Location: 0x28 00:21:56.534 LBA: 0x0 00:21:56.534 Namespace: 0x0 00:21:56.534 Vendor Log Page: 0x0 00:21:56.534 ----------- 00:21:56.534 Entry: 1 00:21:56.534 Error Count: 0x2 00:21:56.534 Submission Queue Id: 0x0 00:21:56.534 Command Id: 0x5 00:21:56.534 Phase Bit: 0 00:21:56.534 Status Code: 0x2 00:21:56.534 Status Code Type: 0x0 00:21:56.534 Do Not Retry: 1 00:21:56.534 Error Location: 0x28 00:21:56.534 LBA: 0x0 00:21:56.534 Namespace: 0x0 00:21:56.534 Vendor Log Page: 0x0 00:21:56.534 ----------- 00:21:56.534 Entry: 2 00:21:56.534 Error Count: 0x1 00:21:56.534 Submission Queue Id: 0x0 00:21:56.534 Command Id: 0x4 00:21:56.534 Phase Bit: 0 00:21:56.534 Status Code: 0x2 00:21:56.534 Status Code Type: 0x0 00:21:56.534 Do Not Retry: 1 00:21:56.534 Error Location: 0x28 00:21:56.534 LBA: 0x0 00:21:56.534 Namespace: 0x0 00:21:56.534 Vendor Log Page: 0x0 00:21:56.534 00:21:56.534 Number of Queues 00:21:56.534 ================ 00:21:56.534 Number of I/O Submission Queues: 128 00:21:56.534 Number of I/O Completion Queues: 128 00:21:56.534 00:21:56.534 ZNS Specific Controller Data 00:21:56.534 ============================ 00:21:56.534 Zone Append Size Limit: 0 00:21:56.534 00:21:56.534 00:21:56.534 Active Namespaces 00:21:56.534 ================= 00:21:56.534 get_feature(0x05) failed 00:21:56.534 Namespace ID:1 00:21:56.534 Command Set Identifier: NVM (00h) 00:21:56.534 Deallocate: Supported 00:21:56.534 Deallocated/Unwritten Error: Not Supported 00:21:56.534 Deallocated Read Value: Unknown 00:21:56.534 Deallocate in Write Zeroes: Not Supported 00:21:56.534 Deallocated Guard Field: 0xFFFF 00:21:56.534 Flush: Supported 00:21:56.534 Reservation: Not Supported 00:21:56.534 Namespace Sharing Capabilities: Multiple Controllers 00:21:56.534 Size (in LBAs): 1953525168 (931GiB) 00:21:56.534 Capacity (in LBAs): 1953525168 (931GiB) 00:21:56.534 Utilization (in LBAs): 1953525168 (931GiB) 00:21:56.534 UUID: 402c303c-5945-4bc9-b0ce-28a576c7fff1 00:21:56.534 Thin Provisioning: Not Supported 00:21:56.534 Per-NS Atomic Units: Yes 00:21:56.534 Atomic Boundary Size (Normal): 0 00:21:56.534 Atomic Boundary Size (PFail): 0 00:21:56.534 Atomic Boundary Offset: 0 00:21:56.534 NGUID/EUI64 Never Reused: No 00:21:56.534 ANA group ID: 1 00:21:56.534 Namespace Write Protected: No 00:21:56.534 Number of LBA Formats: 1 00:21:56.534 Current LBA Format: LBA Format #00 00:21:56.534 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:56.534 00:21:56.534 00:55:49 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:56.534 00:55:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:56.534 00:55:49 -- nvmf/common.sh@117 -- # sync 00:21:56.534 00:55:49 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:56.534 00:55:49 -- nvmf/common.sh@120 -- # set +e 00:21:56.534 00:55:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:56.534 00:55:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:56.534 rmmod nvme_tcp 00:21:56.534 rmmod nvme_fabrics 00:21:56.794 00:55:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:56.794 00:55:49 -- nvmf/common.sh@124 -- # set -e 00:21:56.794 00:55:49 -- nvmf/common.sh@125 -- # return 0 00:21:56.795 00:55:49 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:21:56.795 00:55:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:56.795 00:55:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:56.795 00:55:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:56.795 00:55:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:56.795 00:55:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:56.795 00:55:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.795 00:55:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.795 00:55:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.704 00:55:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:58.704 00:55:51 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:21:58.704 00:55:51 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:58.704 00:55:51 -- nvmf/common.sh@675 -- # echo 0 00:21:58.704 00:55:51 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:58.704 00:55:51 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:58.704 00:55:51 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:58.704 00:55:51 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:58.704 00:55:51 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:21:58.704 00:55:51 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:21:58.704 00:55:51 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:01.240 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:01.240 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:01.240 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:01.240 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:01.240 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:01.240 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:01.240 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:01.240 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:01.240 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:01.240 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:01.240 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:01.240 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:01.240 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:01.240 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:01.241 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:01.241 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:02.177 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:22:02.177 00:22:02.177 real 0m14.556s 00:22:02.177 user 0m3.189s 00:22:02.177 sys 0m7.441s 00:22:02.177 00:55:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:02.177 00:55:54 -- common/autotest_common.sh@10 -- # set +x 00:22:02.177 ************************************ 00:22:02.177 END 
TEST nvmf_identify_kernel_target 00:22:02.177 ************************************ 00:22:02.436 00:55:54 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:02.436 00:55:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:02.436 00:55:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:02.436 00:55:54 -- common/autotest_common.sh@10 -- # set +x 00:22:02.436 ************************************ 00:22:02.436 START TEST nvmf_auth 00:22:02.436 ************************************ 00:22:02.436 00:55:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:02.436 * Looking for test storage... 00:22:02.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:02.436 00:55:55 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.436 00:55:55 -- nvmf/common.sh@7 -- # uname -s 00:22:02.436 00:55:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.436 00:55:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.436 00:55:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.436 00:55:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.436 00:55:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.436 00:55:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.436 00:55:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.436 00:55:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.436 00:55:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.436 00:55:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.436 00:55:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.436 00:55:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.436 00:55:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.436 00:55:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.436 00:55:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.436 00:55:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.436 00:55:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.436 00:55:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.436 00:55:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.436 00:55:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.436 00:55:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.436 00:55:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.436 00:55:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.436 00:55:55 -- paths/export.sh@5 -- # export PATH 00:22:02.436 00:55:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.436 00:55:55 -- nvmf/common.sh@47 -- # : 0 00:22:02.436 00:55:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:02.436 00:55:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:02.436 00:55:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.436 00:55:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.436 00:55:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.436 00:55:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:02.436 00:55:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:02.436 00:55:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:02.696 00:55:55 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:02.696 00:55:55 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:02.696 00:55:55 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:02.696 00:55:55 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:02.696 00:55:55 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:02.696 00:55:55 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:02.696 00:55:55 -- host/auth.sh@21 -- # keys=() 00:22:02.696 00:55:55 -- host/auth.sh@77 -- # nvmftestinit 00:22:02.696 00:55:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:02.696 00:55:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.696 00:55:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:02.696 00:55:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:02.696 00:55:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:02.696 00:55:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.696 00:55:55 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.696 00:55:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.696 00:55:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:02.696 00:55:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:02.696 00:55:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:02.696 00:55:55 -- common/autotest_common.sh@10 -- # set +x 00:22:07.967 00:56:00 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:07.967 00:56:00 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.967 00:56:00 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.967 00:56:00 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.967 00:56:00 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.967 00:56:00 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.967 00:56:00 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.967 00:56:00 -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.967 00:56:00 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.967 00:56:00 -- nvmf/common.sh@296 -- # e810=() 00:22:07.967 00:56:00 -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.967 00:56:00 -- nvmf/common.sh@297 -- # x722=() 00:22:07.967 00:56:00 -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.967 00:56:00 -- nvmf/common.sh@298 -- # mlx=() 00:22:07.967 00:56:00 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.967 00:56:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.967 00:56:00 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.967 00:56:00 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.967 00:56:00 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.967 00:56:00 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.967 00:56:00 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.967 00:56:00 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.967 00:56:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.967 00:56:00 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.967 00:56:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.967 00:56:00 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.967 00:56:00 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.967 00:56:00 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.967 00:56:00 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.967 00:56:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.967 00:56:00 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:07.967 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:07.967 00:56:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.967 00:56:00 -- nvmf/common.sh@341 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:22:07.967 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:07.967 00:56:00 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.967 00:56:00 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.967 00:56:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.967 00:56:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.967 00:56:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:07.967 00:56:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.967 00:56:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:07.967 Found net devices under 0000:86:00.0: cvl_0_0 00:22:07.967 00:56:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.967 00:56:00 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.967 00:56:00 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.967 00:56:00 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:07.967 00:56:00 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.967 00:56:00 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:07.967 Found net devices under 0000:86:00.1: cvl_0_1 00:22:07.967 00:56:00 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.968 00:56:00 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:07.968 00:56:00 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:07.968 00:56:00 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:07.968 00:56:00 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:07.968 00:56:00 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:07.968 00:56:00 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.968 00:56:00 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.968 00:56:00 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.968 00:56:00 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:07.968 00:56:00 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.968 00:56:00 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.968 00:56:00 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:07.968 00:56:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.968 00:56:00 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.968 00:56:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:07.968 00:56:00 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:07.968 00:56:00 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.968 00:56:00 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.968 00:56:00 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.968 00:56:00 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.968 00:56:00 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:07.968 00:56:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.227 00:56:00 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.227 00:56:00 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.227 00:56:00 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:08.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:22:08.227 00:22:08.227 --- 10.0.0.2 ping statistics --- 00:22:08.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.227 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:22:08.227 00:56:00 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:08.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:22:08.227 00:22:08.227 --- 10.0.0.1 ping statistics --- 00:22:08.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.227 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:22:08.227 00:56:00 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.227 00:56:00 -- nvmf/common.sh@411 -- # return 0 00:22:08.227 00:56:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:08.227 00:56:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.227 00:56:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:08.227 00:56:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:08.227 00:56:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.227 00:56:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:08.227 00:56:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:08.227 00:56:00 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:22:08.227 00:56:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:08.227 00:56:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:08.227 00:56:00 -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 00:56:00 -- nvmf/common.sh@470 -- # nvmfpid=1786545 00:22:08.227 00:56:00 -- nvmf/common.sh@471 -- # waitforlisten 1786545 00:22:08.227 00:56:00 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:08.227 00:56:00 -- common/autotest_common.sh@817 -- # '[' -z 1786545 ']' 00:22:08.227 00:56:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.227 00:56:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:08.227 00:56:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
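The wiring exercised above by nvmf_tcp_init and nvmfappstart amounts to: move one E810 port into a private network namespace, address both ends, open TCP/4420, verify reachability, then start nvmf_tgt inside that namespace. A minimal hand-rolled sketch of those steps follows; it reuses the interface names, addresses, and binary path from this run, but it is an approximation of what the traced nvmf/common.sh helpers do, not the helpers themselves.
# Sketch only (assumes root privileges and the cvl_0_0/cvl_0_1 netdev names seen above).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # one port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # its peer stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP through
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1    # sanity-check both directions
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -L nvme_auth &                      # same invocation as nvmfappstart above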
00:22:08.227 00:56:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:08.227 00:56:00 -- common/autotest_common.sh@10 -- # set +x 00:22:09.160 00:56:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:09.160 00:56:01 -- common/autotest_common.sh@850 -- # return 0 00:22:09.160 00:56:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:09.160 00:56:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:09.160 00:56:01 -- common/autotest_common.sh@10 -- # set +x 00:22:09.160 00:56:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.160 00:56:01 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:09.160 00:56:01 -- host/auth.sh@81 -- # gen_key null 32 00:22:09.160 00:56:01 -- host/auth.sh@53 -- # local digest len file key 00:22:09.160 00:56:01 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:09.160 00:56:01 -- host/auth.sh@54 -- # local -A digests 00:22:09.160 00:56:01 -- host/auth.sh@56 -- # digest=null 00:22:09.160 00:56:01 -- host/auth.sh@56 -- # len=32 00:22:09.160 00:56:01 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:09.160 00:56:01 -- host/auth.sh@57 -- # key=def2d0465a5ff040410edaf2ab3ef77e 00:22:09.160 00:56:01 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:22:09.160 00:56:01 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.rXq 00:22:09.160 00:56:01 -- host/auth.sh@59 -- # format_dhchap_key def2d0465a5ff040410edaf2ab3ef77e 0 00:22:09.160 00:56:01 -- nvmf/common.sh@708 -- # format_key DHHC-1 def2d0465a5ff040410edaf2ab3ef77e 0 00:22:09.160 00:56:01 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:09.160 00:56:01 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:09.160 00:56:01 -- nvmf/common.sh@693 -- # key=def2d0465a5ff040410edaf2ab3ef77e 00:22:09.160 00:56:01 -- nvmf/common.sh@693 -- # digest=0 00:22:09.160 00:56:01 -- nvmf/common.sh@694 -- # python - 00:22:09.160 00:56:01 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.rXq 00:22:09.160 00:56:01 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.rXq 00:22:09.160 00:56:01 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.rXq 00:22:09.160 00:56:01 -- host/auth.sh@82 -- # gen_key null 48 00:22:09.160 00:56:01 -- host/auth.sh@53 -- # local digest len file key 00:22:09.160 00:56:01 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:09.160 00:56:01 -- host/auth.sh@54 -- # local -A digests 00:22:09.160 00:56:01 -- host/auth.sh@56 -- # digest=null 00:22:09.160 00:56:01 -- host/auth.sh@56 -- # len=48 00:22:09.160 00:56:01 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:09.160 00:56:01 -- host/auth.sh@57 -- # key=4b0cc81bc792b2f0584a9eebe7c26f9f7eb3df0fe1225cdb 00:22:09.160 00:56:01 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:22:09.160 00:56:01 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.X0F 00:22:09.160 00:56:01 -- host/auth.sh@59 -- # format_dhchap_key 4b0cc81bc792b2f0584a9eebe7c26f9f7eb3df0fe1225cdb 0 00:22:09.160 00:56:01 -- nvmf/common.sh@708 -- # format_key DHHC-1 4b0cc81bc792b2f0584a9eebe7c26f9f7eb3df0fe1225cdb 0 00:22:09.160 00:56:01 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:09.160 00:56:01 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:09.160 00:56:01 -- nvmf/common.sh@693 -- # key=4b0cc81bc792b2f0584a9eebe7c26f9f7eb3df0fe1225cdb 00:22:09.160 00:56:01 -- nvmf/common.sh@693 -- # 
digest=0 00:22:09.160 00:56:01 -- nvmf/common.sh@694 -- # python - 00:22:09.160 00:56:01 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.X0F 00:22:09.160 00:56:01 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.X0F 00:22:09.160 00:56:01 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.X0F 00:22:09.160 00:56:01 -- host/auth.sh@83 -- # gen_key sha256 32 00:22:09.160 00:56:01 -- host/auth.sh@53 -- # local digest len file key 00:22:09.160 00:56:01 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:09.160 00:56:01 -- host/auth.sh@54 -- # local -A digests 00:22:09.160 00:56:01 -- host/auth.sh@56 -- # digest=sha256 00:22:09.160 00:56:01 -- host/auth.sh@56 -- # len=32 00:22:09.160 00:56:01 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:09.160 00:56:01 -- host/auth.sh@57 -- # key=141b5f670b3216112388947532e3574e 00:22:09.160 00:56:01 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:22:09.160 00:56:01 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.EYc 00:22:09.160 00:56:01 -- host/auth.sh@59 -- # format_dhchap_key 141b5f670b3216112388947532e3574e 1 00:22:09.160 00:56:01 -- nvmf/common.sh@708 -- # format_key DHHC-1 141b5f670b3216112388947532e3574e 1 00:22:09.160 00:56:01 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:09.160 00:56:01 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:09.160 00:56:01 -- nvmf/common.sh@693 -- # key=141b5f670b3216112388947532e3574e 00:22:09.160 00:56:01 -- nvmf/common.sh@693 -- # digest=1 00:22:09.160 00:56:01 -- nvmf/common.sh@694 -- # python - 00:22:09.418 00:56:01 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.EYc 00:22:09.418 00:56:01 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.EYc 00:22:09.418 00:56:01 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.EYc 00:22:09.418 00:56:01 -- host/auth.sh@84 -- # gen_key sha384 48 00:22:09.418 00:56:01 -- host/auth.sh@53 -- # local digest len file key 00:22:09.418 00:56:01 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:09.418 00:56:01 -- host/auth.sh@54 -- # local -A digests 00:22:09.418 00:56:01 -- host/auth.sh@56 -- # digest=sha384 00:22:09.418 00:56:01 -- host/auth.sh@56 -- # len=48 00:22:09.418 00:56:01 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:09.418 00:56:01 -- host/auth.sh@57 -- # key=4176bb601b941df9bdde1611c77da8a2d70b57cc232c9d71 00:22:09.418 00:56:01 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:22:09.418 00:56:01 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.NDZ 00:22:09.418 00:56:01 -- host/auth.sh@59 -- # format_dhchap_key 4176bb601b941df9bdde1611c77da8a2d70b57cc232c9d71 2 00:22:09.418 00:56:01 -- nvmf/common.sh@708 -- # format_key DHHC-1 4176bb601b941df9bdde1611c77da8a2d70b57cc232c9d71 2 00:22:09.418 00:56:01 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:09.418 00:56:01 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:09.418 00:56:01 -- nvmf/common.sh@693 -- # key=4176bb601b941df9bdde1611c77da8a2d70b57cc232c9d71 00:22:09.419 00:56:01 -- nvmf/common.sh@693 -- # digest=2 00:22:09.419 00:56:01 -- nvmf/common.sh@694 -- # python - 00:22:09.419 00:56:01 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.NDZ 00:22:09.419 00:56:01 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.NDZ 00:22:09.419 00:56:01 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.NDZ 00:22:09.419 00:56:01 -- host/auth.sh@85 -- # gen_key sha512 64 00:22:09.419 00:56:01 -- host/auth.sh@53 -- # local digest len file key 00:22:09.419 00:56:01 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:09.419 00:56:01 -- host/auth.sh@54 -- # local -A digests 00:22:09.419 00:56:01 -- host/auth.sh@56 -- # digest=sha512 00:22:09.419 00:56:01 -- host/auth.sh@56 -- # len=64 00:22:09.419 00:56:01 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:09.419 00:56:01 -- host/auth.sh@57 -- # key=74e40796f43226b08a49c7b24f3ca09e5fb2517f98ab48a4a4025ef2bd302e84 00:22:09.419 00:56:01 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:22:09.419 00:56:01 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.a60 00:22:09.419 00:56:01 -- host/auth.sh@59 -- # format_dhchap_key 74e40796f43226b08a49c7b24f3ca09e5fb2517f98ab48a4a4025ef2bd302e84 3 00:22:09.419 00:56:01 -- nvmf/common.sh@708 -- # format_key DHHC-1 74e40796f43226b08a49c7b24f3ca09e5fb2517f98ab48a4a4025ef2bd302e84 3 00:22:09.419 00:56:01 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:09.419 00:56:01 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:09.419 00:56:01 -- nvmf/common.sh@693 -- # key=74e40796f43226b08a49c7b24f3ca09e5fb2517f98ab48a4a4025ef2bd302e84 00:22:09.419 00:56:01 -- nvmf/common.sh@693 -- # digest=3 00:22:09.419 00:56:01 -- nvmf/common.sh@694 -- # python - 00:22:09.419 00:56:01 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.a60 00:22:09.419 00:56:01 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.a60 00:22:09.419 00:56:01 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.a60 00:22:09.419 00:56:01 -- host/auth.sh@87 -- # waitforlisten 1786545 00:22:09.419 00:56:01 -- common/autotest_common.sh@817 -- # '[' -z 1786545 ']' 00:22:09.419 00:56:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.419 00:56:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:09.419 00:56:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
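The gen_key/format_dhchap_key sequence above turns random hex from /dev/urandom into the DHHC-1 secrets registered later (DHHC-1:00:... for the null digest, DHHC-1:01:... for sha256, with higher ids following for sha384/sha512). A stand-alone sketch of that representation is shown below: the secret bytes plus an appended CRC-32 are base64-encoded behind the digest identifier. It assumes python3 is available and a little-endian CRC placement; it illustrates the key format rather than reproducing the exact helper in nvmf/common.sh.
# Illustration: build a DHHC-1 style secret like 'gen_key null 32' above (digest id 00).
key_hex=$(xxd -p -c0 -l 16 /dev/urandom)           # 32 hex characters of secret material
python3 - "$key_hex" <<'EOF'
import sys, base64, zlib
secret = sys.argv[1].encode()                      # the hex string itself is the secret bytes
crc = zlib.crc32(secret).to_bytes(4, "little")     # assumed little-endian CRC-32 suffix
print("DHHC-1:00:{}:".format(base64.b64encode(secret + crc).decode()))
EOF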
00:22:09.419 00:56:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:09.419 00:56:01 -- common/autotest_common.sh@10 -- # set +x 00:22:09.677 00:56:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:09.677 00:56:02 -- common/autotest_common.sh@850 -- # return 0 00:22:09.677 00:56:02 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:09.677 00:56:02 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rXq 00:22:09.677 00:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.677 00:56:02 -- common/autotest_common.sh@10 -- # set +x 00:22:09.677 00:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.677 00:56:02 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:09.677 00:56:02 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.X0F 00:22:09.677 00:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.677 00:56:02 -- common/autotest_common.sh@10 -- # set +x 00:22:09.677 00:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.677 00:56:02 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:09.677 00:56:02 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.EYc 00:22:09.677 00:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.677 00:56:02 -- common/autotest_common.sh@10 -- # set +x 00:22:09.677 00:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.677 00:56:02 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:09.677 00:56:02 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.NDZ 00:22:09.677 00:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.677 00:56:02 -- common/autotest_common.sh@10 -- # set +x 00:22:09.677 00:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.677 00:56:02 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:09.677 00:56:02 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.a60 00:22:09.677 00:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:09.677 00:56:02 -- common/autotest_common.sh@10 -- # set +x 00:22:09.677 00:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:09.677 00:56:02 -- host/auth.sh@92 -- # nvmet_auth_init 00:22:09.677 00:56:02 -- host/auth.sh@35 -- # get_main_ns_ip 00:22:09.677 00:56:02 -- nvmf/common.sh@717 -- # local ip 00:22:09.677 00:56:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:09.677 00:56:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:09.677 00:56:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:09.677 00:56:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:09.677 00:56:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:09.677 00:56:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:09.677 00:56:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:09.677 00:56:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:09.677 00:56:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:09.677 00:56:02 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:09.677 00:56:02 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:09.678 00:56:02 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:22:09.678 00:56:02 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:09.678 00:56:02 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:09.678 00:56:02 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:09.678 00:56:02 -- nvmf/common.sh@628 -- # local block nvme 00:22:09.678 00:56:02 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:22:09.678 00:56:02 -- nvmf/common.sh@631 -- # modprobe nvmet 00:22:09.678 00:56:02 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:09.678 00:56:02 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:12.205 Waiting for block devices as requested 00:22:12.205 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:22:12.463 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:12.463 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:12.463 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:12.463 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:12.721 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:12.721 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:12.721 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:12.721 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:12.979 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:12.979 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:12.979 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:13.237 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:13.237 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:13.237 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:13.237 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:13.511 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:14.142 00:56:06 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:14.142 00:56:06 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:14.142 00:56:06 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:22:14.142 00:56:06 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:14.142 00:56:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:14.142 00:56:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:14.142 00:56:06 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:22:14.142 00:56:06 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:14.142 00:56:06 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:14.142 No valid GPT data, bailing 00:22:14.142 00:56:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:14.142 00:56:06 -- scripts/common.sh@391 -- # pt= 00:22:14.142 00:56:06 -- scripts/common.sh@392 -- # return 1 00:22:14.142 00:56:06 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:22:14.142 00:56:06 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:22:14.142 00:56:06 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:14.142 00:56:06 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:14.142 00:56:06 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:14.142 00:56:06 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:14.142 00:56:06 -- nvmf/common.sh@656 -- # echo 1 00:22:14.142 00:56:06 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:22:14.142 00:56:06 -- nvmf/common.sh@658 -- # echo 1 00:22:14.142 00:56:06 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:22:14.142 00:56:06 -- nvmf/common.sh@661 -- # echo tcp 00:22:14.142 00:56:06 -- 
nvmf/common.sh@662 -- # echo 4420 00:22:14.142 00:56:06 -- nvmf/common.sh@663 -- # echo ipv4 00:22:14.142 00:56:06 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:14.142 00:56:06 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:22:14.142 00:22:14.142 Discovery Log Number of Records 2, Generation counter 2 00:22:14.142 =====Discovery Log Entry 0====== 00:22:14.142 trtype: tcp 00:22:14.142 adrfam: ipv4 00:22:14.142 subtype: current discovery subsystem 00:22:14.142 treq: not specified, sq flow control disable supported 00:22:14.142 portid: 1 00:22:14.142 trsvcid: 4420 00:22:14.142 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:14.142 traddr: 10.0.0.1 00:22:14.142 eflags: none 00:22:14.142 sectype: none 00:22:14.142 =====Discovery Log Entry 1====== 00:22:14.142 trtype: tcp 00:22:14.142 adrfam: ipv4 00:22:14.142 subtype: nvme subsystem 00:22:14.142 treq: not specified, sq flow control disable supported 00:22:14.142 portid: 1 00:22:14.142 trsvcid: 4420 00:22:14.142 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:14.142 traddr: 10.0.0.1 00:22:14.142 eflags: none 00:22:14.142 sectype: none 00:22:14.142 00:56:06 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:14.142 00:56:06 -- host/auth.sh@37 -- # echo 0 00:22:14.142 00:56:06 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:14.142 00:56:06 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:14.142 00:56:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:14.142 00:56:06 -- host/auth.sh@44 -- # digest=sha256 00:22:14.142 00:56:06 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:14.142 00:56:06 -- host/auth.sh@44 -- # keyid=1 00:22:14.142 00:56:06 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:14.142 00:56:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:14.142 00:56:06 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:14.142 00:56:06 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:14.142 00:56:06 -- host/auth.sh@100 -- # IFS=, 00:22:14.142 00:56:06 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:22:14.142 00:56:06 -- host/auth.sh@100 -- # IFS=, 00:22:14.142 00:56:06 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:14.142 00:56:06 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:14.142 00:56:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:14.142 00:56:06 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:22:14.142 00:56:06 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:14.142 00:56:06 -- host/auth.sh@68 -- # keyid=1 00:22:14.142 00:56:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:14.142 00:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.142 00:56:06 -- common/autotest_common.sh@10 -- # set +x 00:22:14.142 00:56:06 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.142 00:56:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:14.142 00:56:06 -- nvmf/common.sh@717 -- # local ip 00:22:14.142 00:56:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:14.142 00:56:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:14.142 00:56:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.143 00:56:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.143 00:56:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:14.143 00:56:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.143 00:56:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:14.143 00:56:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:14.143 00:56:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:14.143 00:56:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:14.143 00:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.143 00:56:06 -- common/autotest_common.sh@10 -- # set +x 00:22:14.414 nvme0n1 00:22:14.414 00:56:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.414 00:56:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.414 00:56:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:14.414 00:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.414 00:56:06 -- common/autotest_common.sh@10 -- # set +x 00:22:14.414 00:56:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.414 00:56:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.414 00:56:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.414 00:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.414 00:56:06 -- common/autotest_common.sh@10 -- # set +x 00:22:14.414 00:56:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.414 00:56:06 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:22:14.414 00:56:06 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:14.414 00:56:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:14.414 00:56:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:14.414 00:56:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:14.414 00:56:06 -- host/auth.sh@44 -- # digest=sha256 00:22:14.414 00:56:06 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:14.414 00:56:06 -- host/auth.sh@44 -- # keyid=0 00:22:14.414 00:56:06 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:14.414 00:56:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:14.414 00:56:06 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:14.414 00:56:06 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:14.414 00:56:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:22:14.414 00:56:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:14.414 00:56:06 -- host/auth.sh@68 -- # digest=sha256 00:22:14.414 00:56:06 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:14.414 00:56:06 -- host/auth.sh@68 -- # keyid=0 00:22:14.414 00:56:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:14.414 00:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.414 00:56:06 -- common/autotest_common.sh@10 -- # set +x 00:22:14.414 00:56:07 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.414 00:56:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:14.414 00:56:07 -- nvmf/common.sh@717 -- # local ip 00:22:14.414 00:56:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:14.414 00:56:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:14.414 00:56:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.414 00:56:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.414 00:56:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:14.414 00:56:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.414 00:56:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:14.414 00:56:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:14.414 00:56:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:14.414 00:56:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:14.414 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.414 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:14.673 nvme0n1 00:22:14.673 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.673 00:56:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.673 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.673 00:56:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:14.673 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:14.673 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.673 00:56:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.673 00:56:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.673 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.673 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:14.673 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.673 00:56:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:14.673 00:56:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:14.673 00:56:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:14.673 00:56:07 -- host/auth.sh@44 -- # digest=sha256 00:22:14.673 00:56:07 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:14.673 00:56:07 -- host/auth.sh@44 -- # keyid=1 00:22:14.673 00:56:07 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:14.673 00:56:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:14.673 00:56:07 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:14.673 00:56:07 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:14.673 00:56:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:22:14.673 00:56:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:14.673 00:56:07 -- host/auth.sh@68 -- # digest=sha256 00:22:14.673 00:56:07 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:14.673 00:56:07 -- host/auth.sh@68 -- # keyid=1 00:22:14.673 00:56:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:14.673 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.673 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:14.673 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.673 00:56:07 -- host/auth.sh@70 -- # get_main_ns_ip 
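Each connect_authenticate pass above boils down to four RPCs: register the generated key file with the keyring, constrain the negotiable digests and DH groups, attach the controller with --dhchap-key, then confirm the "nvme0" controller exists and detach it. A hand-driven equivalent using scripts/rpc.py is sketched below, assuming the default /var/tmp/spdk.sock RPC socket; the key path and NQNs are copied from this run, and the sketch mirrors the rpc_cmd calls traced above rather than the auth.sh helper itself.
# Sketch: one sha256/ffdhe2048 authentication pass, driven directly via rpc.py.
R=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$R keyring_file_add_key key0 /tmp/spdk.key-null.rXq
$R bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$R bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
$R bdev_nvme_get_controllers | jq -r '.[].name'    # expect "nvme0" once DH-HMAC-CHAP succeeds
$R bdev_nvme_detach_controller nvme0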
00:22:14.673 00:56:07 -- nvmf/common.sh@717 -- # local ip 00:22:14.673 00:56:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:14.673 00:56:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:14.673 00:56:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.673 00:56:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.673 00:56:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:14.673 00:56:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.673 00:56:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:14.673 00:56:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:14.673 00:56:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:14.673 00:56:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:14.673 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.673 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:14.932 nvme0n1 00:22:14.932 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.932 00:56:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.932 00:56:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:14.932 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.932 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:14.932 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.932 00:56:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.932 00:56:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.932 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.932 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:14.932 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.932 00:56:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:14.932 00:56:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:14.932 00:56:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:14.932 00:56:07 -- host/auth.sh@44 -- # digest=sha256 00:22:14.932 00:56:07 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:14.932 00:56:07 -- host/auth.sh@44 -- # keyid=2 00:22:14.932 00:56:07 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:14.932 00:56:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:14.932 00:56:07 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:14.932 00:56:07 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:14.932 00:56:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:22:14.932 00:56:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:14.932 00:56:07 -- host/auth.sh@68 -- # digest=sha256 00:22:14.932 00:56:07 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:14.932 00:56:07 -- host/auth.sh@68 -- # keyid=2 00:22:14.932 00:56:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:14.932 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.932 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:14.932 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.932 00:56:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:14.932 00:56:07 -- nvmf/common.sh@717 -- # local ip 00:22:14.932 00:56:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:14.932 00:56:07 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:22:14.932 00:56:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:14.932 00:56:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:14.932 00:56:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:14.932 00:56:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:14.932 00:56:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:14.932 00:56:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:14.932 00:56:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:14.932 00:56:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:14.932 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.932 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:14.932 nvme0n1 00:22:14.932 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:14.932 00:56:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:14.932 00:56:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:14.932 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:14.932 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:14.932 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.191 00:56:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.191 00:56:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.191 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.191 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:15.191 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.191 00:56:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:15.191 00:56:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:15.191 00:56:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:15.191 00:56:07 -- host/auth.sh@44 -- # digest=sha256 00:22:15.191 00:56:07 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:15.191 00:56:07 -- host/auth.sh@44 -- # keyid=3 00:22:15.191 00:56:07 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:15.191 00:56:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:15.191 00:56:07 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:15.191 00:56:07 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:15.191 00:56:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:22:15.191 00:56:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:15.191 00:56:07 -- host/auth.sh@68 -- # digest=sha256 00:22:15.191 00:56:07 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:15.191 00:56:07 -- host/auth.sh@68 -- # keyid=3 00:22:15.191 00:56:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:15.191 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.191 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:15.191 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.191 00:56:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:15.191 00:56:07 -- nvmf/common.sh@717 -- # local ip 00:22:15.191 00:56:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:15.191 00:56:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:15.191 00:56:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:22:15.191 00:56:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:15.191 00:56:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:15.191 00:56:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:15.191 00:56:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:15.191 00:56:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:15.191 00:56:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:15.191 00:56:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:15.191 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.191 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:15.191 nvme0n1 00:22:15.191 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.191 00:56:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.191 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.191 00:56:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:15.191 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:15.191 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.191 00:56:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.191 00:56:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.191 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.191 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:15.191 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.191 00:56:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:15.191 00:56:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:15.191 00:56:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:15.191 00:56:07 -- host/auth.sh@44 -- # digest=sha256 00:22:15.191 00:56:07 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:15.191 00:56:07 -- host/auth.sh@44 -- # keyid=4 00:22:15.191 00:56:07 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:15.191 00:56:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:15.191 00:56:07 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:15.191 00:56:07 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:15.191 00:56:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:22:15.191 00:56:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:15.191 00:56:07 -- host/auth.sh@68 -- # digest=sha256 00:22:15.191 00:56:07 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:15.191 00:56:07 -- host/auth.sh@68 -- # keyid=4 00:22:15.191 00:56:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:15.191 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.191 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:15.450 00:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.450 00:56:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:15.450 00:56:07 -- nvmf/common.sh@717 -- # local ip 00:22:15.450 00:56:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:15.450 00:56:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:15.450 00:56:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:15.450 00:56:07 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:15.450 00:56:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:15.450 00:56:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:15.450 00:56:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:15.450 00:56:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:15.450 00:56:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:15.450 00:56:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:15.450 00:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.450 00:56:07 -- common/autotest_common.sh@10 -- # set +x 00:22:15.450 nvme0n1 00:22:15.450 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.450 00:56:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.450 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.450 00:56:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:15.450 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:15.450 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.450 00:56:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.450 00:56:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.450 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.450 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:15.450 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.450 00:56:08 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.450 00:56:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:15.450 00:56:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:15.450 00:56:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:15.450 00:56:08 -- host/auth.sh@44 -- # digest=sha256 00:22:15.450 00:56:08 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:15.450 00:56:08 -- host/auth.sh@44 -- # keyid=0 00:22:15.450 00:56:08 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:15.450 00:56:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:15.450 00:56:08 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:15.450 00:56:08 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:15.450 00:56:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:22:15.450 00:56:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:15.450 00:56:08 -- host/auth.sh@68 -- # digest=sha256 00:22:15.450 00:56:08 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:15.450 00:56:08 -- host/auth.sh@68 -- # keyid=0 00:22:15.450 00:56:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:15.450 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.450 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:15.450 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.450 00:56:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:15.450 00:56:08 -- nvmf/common.sh@717 -- # local ip 00:22:15.450 00:56:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:15.450 00:56:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:15.450 00:56:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:15.450 00:56:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:15.450 00:56:08 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:22:15.450 00:56:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:15.450 00:56:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:15.450 00:56:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:15.450 00:56:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:15.450 00:56:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:15.450 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.450 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:15.709 nvme0n1 00:22:15.709 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.709 00:56:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.709 00:56:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:15.709 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.709 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:15.709 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.709 00:56:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.709 00:56:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.709 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.709 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:15.709 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.709 00:56:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:15.709 00:56:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:15.709 00:56:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:15.709 00:56:08 -- host/auth.sh@44 -- # digest=sha256 00:22:15.710 00:56:08 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:15.710 00:56:08 -- host/auth.sh@44 -- # keyid=1 00:22:15.710 00:56:08 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:15.710 00:56:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:15.710 00:56:08 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:15.710 00:56:08 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:15.710 00:56:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:22:15.710 00:56:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:15.710 00:56:08 -- host/auth.sh@68 -- # digest=sha256 00:22:15.710 00:56:08 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:15.710 00:56:08 -- host/auth.sh@68 -- # keyid=1 00:22:15.710 00:56:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:15.710 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.710 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:15.710 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.710 00:56:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:15.710 00:56:08 -- nvmf/common.sh@717 -- # local ip 00:22:15.710 00:56:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:15.710 00:56:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:15.710 00:56:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:15.710 00:56:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:15.710 00:56:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:15.710 00:56:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:15.710 00:56:08 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:15.710 00:56:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:15.710 00:56:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:15.710 00:56:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:15.710 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.710 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:15.968 nvme0n1 00:22:15.968 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.968 00:56:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:15.968 00:56:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:15.969 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.969 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:15.969 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.969 00:56:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.969 00:56:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:15.969 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.969 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:15.969 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.969 00:56:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:15.969 00:56:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:15.969 00:56:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:15.969 00:56:08 -- host/auth.sh@44 -- # digest=sha256 00:22:15.969 00:56:08 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:15.969 00:56:08 -- host/auth.sh@44 -- # keyid=2 00:22:15.969 00:56:08 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:15.969 00:56:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:15.969 00:56:08 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:15.969 00:56:08 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:15.969 00:56:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:22:15.969 00:56:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:15.969 00:56:08 -- host/auth.sh@68 -- # digest=sha256 00:22:15.969 00:56:08 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:15.969 00:56:08 -- host/auth.sh@68 -- # keyid=2 00:22:15.969 00:56:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:15.969 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.969 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:15.969 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:15.969 00:56:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:15.969 00:56:08 -- nvmf/common.sh@717 -- # local ip 00:22:15.969 00:56:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:15.969 00:56:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:15.969 00:56:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:15.969 00:56:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:15.969 00:56:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:15.969 00:56:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:15.969 00:56:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:15.969 00:56:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:15.969 00:56:08 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:22:15.969 00:56:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:15.969 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:15.969 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:16.227 nvme0n1 00:22:16.227 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.227 00:56:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.227 00:56:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:16.227 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.227 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:16.227 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.227 00:56:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.227 00:56:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.227 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.227 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:16.227 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.227 00:56:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:16.227 00:56:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:16.227 00:56:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:16.227 00:56:08 -- host/auth.sh@44 -- # digest=sha256 00:22:16.227 00:56:08 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:16.227 00:56:08 -- host/auth.sh@44 -- # keyid=3 00:22:16.227 00:56:08 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:16.227 00:56:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:16.228 00:56:08 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:16.228 00:56:08 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:16.228 00:56:08 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:22:16.228 00:56:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:16.228 00:56:08 -- host/auth.sh@68 -- # digest=sha256 00:22:16.228 00:56:08 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:16.228 00:56:08 -- host/auth.sh@68 -- # keyid=3 00:22:16.228 00:56:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:16.228 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.228 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:16.228 00:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.228 00:56:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:16.228 00:56:08 -- nvmf/common.sh@717 -- # local ip 00:22:16.228 00:56:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:16.228 00:56:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:16.228 00:56:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:16.228 00:56:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:16.228 00:56:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:16.228 00:56:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:16.228 00:56:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:16.228 00:56:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:16.228 00:56:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:16.228 00:56:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:16.228 00:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.228 00:56:08 -- common/autotest_common.sh@10 -- # set +x 00:22:16.486 nvme0n1 00:22:16.486 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.486 00:56:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.486 00:56:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:16.486 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.486 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:16.486 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.486 00:56:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.486 00:56:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.486 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.486 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:16.486 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.486 00:56:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:16.486 00:56:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:16.486 00:56:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:16.486 00:56:09 -- host/auth.sh@44 -- # digest=sha256 00:22:16.486 00:56:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:16.486 00:56:09 -- host/auth.sh@44 -- # keyid=4 00:22:16.486 00:56:09 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:16.486 00:56:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:16.486 00:56:09 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:16.486 00:56:09 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:16.486 00:56:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:22:16.486 00:56:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:16.486 00:56:09 -- host/auth.sh@68 -- # digest=sha256 00:22:16.486 00:56:09 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:16.486 00:56:09 -- host/auth.sh@68 -- # keyid=4 00:22:16.486 00:56:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:16.486 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.486 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:16.486 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.486 00:56:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:16.486 00:56:09 -- nvmf/common.sh@717 -- # local ip 00:22:16.486 00:56:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:16.486 00:56:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:16.487 00:56:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:16.487 00:56:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:16.487 00:56:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:16.487 00:56:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:16.487 00:56:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:16.487 00:56:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:16.487 00:56:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:16.487 00:56:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:22:16.487 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.487 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:16.745 nvme0n1 00:22:16.745 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.745 00:56:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:16.745 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.745 00:56:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:16.745 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:16.745 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.745 00:56:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.745 00:56:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:16.745 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.745 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:16.745 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.745 00:56:09 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:16.745 00:56:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:16.745 00:56:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:16.745 00:56:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:16.745 00:56:09 -- host/auth.sh@44 -- # digest=sha256 00:22:16.745 00:56:09 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:16.745 00:56:09 -- host/auth.sh@44 -- # keyid=0 00:22:16.745 00:56:09 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:16.745 00:56:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:16.745 00:56:09 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:16.745 00:56:09 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:16.745 00:56:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:22:16.745 00:56:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:16.745 00:56:09 -- host/auth.sh@68 -- # digest=sha256 00:22:16.745 00:56:09 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:16.745 00:56:09 -- host/auth.sh@68 -- # keyid=0 00:22:16.745 00:56:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:16.745 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.745 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:16.745 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.745 00:56:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:16.745 00:56:09 -- nvmf/common.sh@717 -- # local ip 00:22:16.745 00:56:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:16.745 00:56:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:16.745 00:56:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:16.745 00:56:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:16.745 00:56:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:16.745 00:56:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:16.745 00:56:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:16.745 00:56:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:16.745 00:56:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:16.745 00:56:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:16.745 00:56:09 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:22:16.745 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:17.004 nvme0n1 00:22:17.004 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.004 00:56:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.004 00:56:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:17.004 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.004 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:17.004 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.004 00:56:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.004 00:56:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.004 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.004 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:17.004 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.004 00:56:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:17.004 00:56:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:17.004 00:56:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:17.004 00:56:09 -- host/auth.sh@44 -- # digest=sha256 00:22:17.004 00:56:09 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:17.004 00:56:09 -- host/auth.sh@44 -- # keyid=1 00:22:17.004 00:56:09 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:17.004 00:56:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:17.004 00:56:09 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:17.004 00:56:09 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:17.004 00:56:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:22:17.004 00:56:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:17.004 00:56:09 -- host/auth.sh@68 -- # digest=sha256 00:22:17.004 00:56:09 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:17.004 00:56:09 -- host/auth.sh@68 -- # keyid=1 00:22:17.004 00:56:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:17.004 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.004 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:17.004 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.004 00:56:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:17.004 00:56:09 -- nvmf/common.sh@717 -- # local ip 00:22:17.004 00:56:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:17.004 00:56:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:17.004 00:56:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.004 00:56:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.004 00:56:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:17.004 00:56:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:17.004 00:56:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:17.004 00:56:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:17.004 00:56:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:17.005 00:56:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:17.005 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.005 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:17.263 nvme0n1 00:22:17.263 
00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.263 00:56:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:17.263 00:56:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:17.263 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.263 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:17.263 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.521 00:56:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.522 00:56:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.522 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.522 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:17.522 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.522 00:56:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:17.522 00:56:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:17.522 00:56:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:17.522 00:56:09 -- host/auth.sh@44 -- # digest=sha256 00:22:17.522 00:56:09 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:17.522 00:56:09 -- host/auth.sh@44 -- # keyid=2 00:22:17.522 00:56:09 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:17.522 00:56:09 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:17.522 00:56:09 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:17.522 00:56:09 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:17.522 00:56:09 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:22:17.522 00:56:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:17.522 00:56:09 -- host/auth.sh@68 -- # digest=sha256 00:22:17.522 00:56:09 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:17.522 00:56:09 -- host/auth.sh@68 -- # keyid=2 00:22:17.522 00:56:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:17.522 00:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.522 00:56:09 -- common/autotest_common.sh@10 -- # set +x 00:22:17.522 00:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.522 00:56:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:17.522 00:56:09 -- nvmf/common.sh@717 -- # local ip 00:22:17.522 00:56:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:17.522 00:56:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:17.522 00:56:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.522 00:56:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.522 00:56:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:17.522 00:56:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:17.522 00:56:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:17.522 00:56:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:17.522 00:56:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:17.522 00:56:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:17.522 00:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.522 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:22:17.781 nvme0n1 00:22:17.781 00:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.781 00:56:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:17.781 00:56:10 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:22:17.781 00:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.781 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:22:17.781 00:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.781 00:56:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.781 00:56:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.781 00:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.781 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:22:17.781 00:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.781 00:56:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:17.781 00:56:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:17.781 00:56:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:17.781 00:56:10 -- host/auth.sh@44 -- # digest=sha256 00:22:17.781 00:56:10 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:17.781 00:56:10 -- host/auth.sh@44 -- # keyid=3 00:22:17.781 00:56:10 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:17.781 00:56:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:17.781 00:56:10 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:17.781 00:56:10 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:17.781 00:56:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:22:17.781 00:56:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:17.781 00:56:10 -- host/auth.sh@68 -- # digest=sha256 00:22:17.781 00:56:10 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:17.781 00:56:10 -- host/auth.sh@68 -- # keyid=3 00:22:17.781 00:56:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:17.781 00:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.781 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:22:17.781 00:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:17.781 00:56:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:17.781 00:56:10 -- nvmf/common.sh@717 -- # local ip 00:22:17.781 00:56:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:17.781 00:56:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:17.781 00:56:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:17.781 00:56:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:17.781 00:56:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:17.781 00:56:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:17.781 00:56:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:17.781 00:56:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:17.781 00:56:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:17.781 00:56:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:17.781 00:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:17.781 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:22:18.039 nvme0n1 00:22:18.039 00:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.039 00:56:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:18.039 00:56:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.039 00:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 
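On the target side, each nvmet_auth_set_key call in the trace shows only three echo lines, the HMAC name, the FFDHE group, and the DHHC-1 secret, because the redirections that consume them are not part of the xtrace output. On a kernel nvmet target those values would typically be written into the per-host DH-HMAC-CHAP attributes in configfs; a rough equivalent for keyid 3 of the ffdhe4096 pass (the values echoed just above), with the configfs path and attribute names assumed rather than taken from this log:

  # Assumption: kernel nvmet target with a host entry already created for the initiator NQN.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host_dir/dhchap_hash"     # digest advertised for DH-HMAC-CHAP
  echo 'ffdhe4096'    > "$host_dir/dhchap_dhgroup"  # FFDHE group under test
  echo 'DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==:' > "$host_dir/dhchap_key"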
00:22:18.039 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:22:18.039 00:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.039 00:56:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.039 00:56:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.039 00:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.039 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:22:18.039 00:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.039 00:56:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:18.039 00:56:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:18.039 00:56:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:18.039 00:56:10 -- host/auth.sh@44 -- # digest=sha256 00:22:18.039 00:56:10 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:18.039 00:56:10 -- host/auth.sh@44 -- # keyid=4 00:22:18.039 00:56:10 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:18.039 00:56:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:18.039 00:56:10 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:18.039 00:56:10 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:18.039 00:56:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:22:18.039 00:56:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:18.039 00:56:10 -- host/auth.sh@68 -- # digest=sha256 00:22:18.039 00:56:10 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:18.039 00:56:10 -- host/auth.sh@68 -- # keyid=4 00:22:18.039 00:56:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:18.039 00:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.039 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:22:18.039 00:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.039 00:56:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:18.039 00:56:10 -- nvmf/common.sh@717 -- # local ip 00:22:18.039 00:56:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:18.039 00:56:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:18.039 00:56:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:18.039 00:56:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:18.039 00:56:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:18.039 00:56:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:18.040 00:56:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:18.040 00:56:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:18.040 00:56:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:18.040 00:56:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:18.040 00:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.040 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:22:18.323 nvme0n1 00:22:18.323 00:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.323 00:56:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:18.323 00:56:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.323 00:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.323 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:22:18.323 
00:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.323 00:56:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.323 00:56:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.323 00:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.323 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:22:18.323 00:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.323 00:56:10 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:18.323 00:56:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:18.323 00:56:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:18.323 00:56:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:18.323 00:56:10 -- host/auth.sh@44 -- # digest=sha256 00:22:18.323 00:56:10 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:18.323 00:56:10 -- host/auth.sh@44 -- # keyid=0 00:22:18.323 00:56:10 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:18.323 00:56:10 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:18.323 00:56:10 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:18.323 00:56:10 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:18.323 00:56:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:22:18.323 00:56:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:18.323 00:56:10 -- host/auth.sh@68 -- # digest=sha256 00:22:18.323 00:56:10 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:18.323 00:56:10 -- host/auth.sh@68 -- # keyid=0 00:22:18.323 00:56:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:18.323 00:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.323 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:22:18.323 00:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.323 00:56:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:18.323 00:56:10 -- nvmf/common.sh@717 -- # local ip 00:22:18.323 00:56:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:18.324 00:56:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:18.324 00:56:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:18.324 00:56:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:18.324 00:56:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:18.324 00:56:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:18.324 00:56:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:18.324 00:56:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:18.324 00:56:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:18.324 00:56:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:18.324 00:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.324 00:56:10 -- common/autotest_common.sh@10 -- # set +x 00:22:18.889 nvme0n1 00:22:18.889 00:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.889 00:56:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:18.889 00:56:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:18.889 00:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.889 00:56:11 -- common/autotest_common.sh@10 -- # set +x 00:22:18.889 00:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.889 00:56:11 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.889 00:56:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.889 00:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.889 00:56:11 -- common/autotest_common.sh@10 -- # set +x 00:22:18.889 00:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.889 00:56:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:18.889 00:56:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:18.889 00:56:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:18.889 00:56:11 -- host/auth.sh@44 -- # digest=sha256 00:22:18.889 00:56:11 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:18.889 00:56:11 -- host/auth.sh@44 -- # keyid=1 00:22:18.889 00:56:11 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:18.889 00:56:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:18.889 00:56:11 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:18.889 00:56:11 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:18.889 00:56:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:22:18.889 00:56:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:18.889 00:56:11 -- host/auth.sh@68 -- # digest=sha256 00:22:18.889 00:56:11 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:18.889 00:56:11 -- host/auth.sh@68 -- # keyid=1 00:22:18.889 00:56:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:18.889 00:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.889 00:56:11 -- common/autotest_common.sh@10 -- # set +x 00:22:18.889 00:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.889 00:56:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:18.889 00:56:11 -- nvmf/common.sh@717 -- # local ip 00:22:18.889 00:56:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:18.889 00:56:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:18.889 00:56:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:18.889 00:56:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:18.889 00:56:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:18.889 00:56:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:18.889 00:56:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:18.889 00:56:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:18.889 00:56:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:18.889 00:56:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:18.889 00:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.889 00:56:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.147 nvme0n1 00:22:19.147 00:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.147 00:56:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.147 00:56:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:19.147 00:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.147 00:56:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.147 00:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.147 00:56:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.147 00:56:11 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:19.147 00:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.147 00:56:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.147 00:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.147 00:56:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:19.147 00:56:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:19.147 00:56:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:19.147 00:56:11 -- host/auth.sh@44 -- # digest=sha256 00:22:19.147 00:56:11 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:19.147 00:56:11 -- host/auth.sh@44 -- # keyid=2 00:22:19.147 00:56:11 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:19.147 00:56:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:19.147 00:56:11 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:19.147 00:56:11 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:19.147 00:56:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:22:19.147 00:56:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:19.147 00:56:11 -- host/auth.sh@68 -- # digest=sha256 00:22:19.147 00:56:11 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:19.147 00:56:11 -- host/auth.sh@68 -- # keyid=2 00:22:19.147 00:56:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:19.147 00:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.147 00:56:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.147 00:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.147 00:56:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:19.147 00:56:11 -- nvmf/common.sh@717 -- # local ip 00:22:19.147 00:56:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:19.147 00:56:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:19.147 00:56:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.147 00:56:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.147 00:56:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:19.147 00:56:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:19.147 00:56:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:19.147 00:56:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:19.147 00:56:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:19.147 00:56:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:19.147 00:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.147 00:56:11 -- common/autotest_common.sh@10 -- # set +x 00:22:19.714 nvme0n1 00:22:19.714 00:56:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.714 00:56:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:19.714 00:56:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.714 00:56:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.714 00:56:12 -- common/autotest_common.sh@10 -- # set +x 00:22:19.714 00:56:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.714 00:56:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.714 00:56:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.714 00:56:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.714 00:56:12 -- common/autotest_common.sh@10 -- # 
set +x 00:22:19.714 00:56:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.714 00:56:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:19.714 00:56:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:19.714 00:56:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:19.714 00:56:12 -- host/auth.sh@44 -- # digest=sha256 00:22:19.714 00:56:12 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:19.714 00:56:12 -- host/auth.sh@44 -- # keyid=3 00:22:19.714 00:56:12 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:19.714 00:56:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:19.714 00:56:12 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:19.715 00:56:12 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:19.715 00:56:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:22:19.715 00:56:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:19.715 00:56:12 -- host/auth.sh@68 -- # digest=sha256 00:22:19.715 00:56:12 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:19.715 00:56:12 -- host/auth.sh@68 -- # keyid=3 00:22:19.715 00:56:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:19.715 00:56:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.715 00:56:12 -- common/autotest_common.sh@10 -- # set +x 00:22:19.715 00:56:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.715 00:56:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:19.715 00:56:12 -- nvmf/common.sh@717 -- # local ip 00:22:19.715 00:56:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:19.715 00:56:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:19.715 00:56:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.715 00:56:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.715 00:56:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:19.715 00:56:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:19.715 00:56:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:19.715 00:56:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:19.715 00:56:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:19.715 00:56:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:19.715 00:56:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.715 00:56:12 -- common/autotest_common.sh@10 -- # set +x 00:22:19.973 nvme0n1 00:22:19.973 00:56:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.973 00:56:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:19.973 00:56:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:19.973 00:56:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.973 00:56:12 -- common/autotest_common.sh@10 -- # set +x 00:22:19.973 00:56:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.973 00:56:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.973 00:56:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:19.973 00:56:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.974 00:56:12 -- common/autotest_common.sh@10 -- # set +x 00:22:19.974 00:56:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.974 00:56:12 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:19.974 00:56:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:19.974 00:56:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:19.974 00:56:12 -- host/auth.sh@44 -- # digest=sha256 00:22:19.974 00:56:12 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:19.974 00:56:12 -- host/auth.sh@44 -- # keyid=4 00:22:19.974 00:56:12 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:19.974 00:56:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:19.974 00:56:12 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:19.974 00:56:12 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:19.974 00:56:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:22:19.974 00:56:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:19.974 00:56:12 -- host/auth.sh@68 -- # digest=sha256 00:22:19.974 00:56:12 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:19.974 00:56:12 -- host/auth.sh@68 -- # keyid=4 00:22:19.974 00:56:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:19.974 00:56:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.974 00:56:12 -- common/autotest_common.sh@10 -- # set +x 00:22:19.974 00:56:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:19.974 00:56:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:19.974 00:56:12 -- nvmf/common.sh@717 -- # local ip 00:22:19.974 00:56:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:19.974 00:56:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:19.974 00:56:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:19.974 00:56:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:19.974 00:56:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:19.974 00:56:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:19.974 00:56:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:19.974 00:56:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:19.974 00:56:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:19.974 00:56:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:19.974 00:56:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:19.974 00:56:12 -- common/autotest_common.sh@10 -- # set +x 00:22:20.540 nvme0n1 00:22:20.540 00:56:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.540 00:56:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:20.540 00:56:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:20.540 00:56:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.540 00:56:13 -- common/autotest_common.sh@10 -- # set +x 00:22:20.540 00:56:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.540 00:56:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.540 00:56:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.540 00:56:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.540 00:56:13 -- common/autotest_common.sh@10 -- # set +x 00:22:20.540 00:56:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.540 00:56:13 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.540 00:56:13 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:20.540 00:56:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:20.540 00:56:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:20.540 00:56:13 -- host/auth.sh@44 -- # digest=sha256 00:22:20.540 00:56:13 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:20.540 00:56:13 -- host/auth.sh@44 -- # keyid=0 00:22:20.540 00:56:13 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:20.540 00:56:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:20.540 00:56:13 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:20.540 00:56:13 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:20.540 00:56:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:22:20.540 00:56:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:20.540 00:56:13 -- host/auth.sh@68 -- # digest=sha256 00:22:20.540 00:56:13 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:20.540 00:56:13 -- host/auth.sh@68 -- # keyid=0 00:22:20.540 00:56:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:20.540 00:56:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.540 00:56:13 -- common/autotest_common.sh@10 -- # set +x 00:22:20.540 00:56:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:20.540 00:56:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:20.540 00:56:13 -- nvmf/common.sh@717 -- # local ip 00:22:20.540 00:56:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:20.540 00:56:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:20.540 00:56:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:20.540 00:56:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:20.540 00:56:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:20.540 00:56:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:20.540 00:56:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:20.540 00:56:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:20.540 00:56:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:20.540 00:56:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:20.540 00:56:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:20.540 00:56:13 -- common/autotest_common.sh@10 -- # set +x 00:22:21.105 nvme0n1 00:22:21.105 00:56:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.105 00:56:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:21.105 00:56:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.105 00:56:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.105 00:56:13 -- common/autotest_common.sh@10 -- # set +x 00:22:21.105 00:56:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.105 00:56:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.106 00:56:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.106 00:56:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.106 00:56:13 -- common/autotest_common.sh@10 -- # set +x 00:22:21.106 00:56:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.106 00:56:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:21.106 00:56:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:21.106 00:56:13 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:21.106 00:56:13 -- host/auth.sh@44 -- # digest=sha256 00:22:21.106 00:56:13 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:21.106 00:56:13 -- host/auth.sh@44 -- # keyid=1 00:22:21.106 00:56:13 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:21.106 00:56:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:21.106 00:56:13 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:21.106 00:56:13 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:21.106 00:56:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:22:21.106 00:56:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:21.106 00:56:13 -- host/auth.sh@68 -- # digest=sha256 00:22:21.106 00:56:13 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:21.106 00:56:13 -- host/auth.sh@68 -- # keyid=1 00:22:21.106 00:56:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:21.106 00:56:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.106 00:56:13 -- common/autotest_common.sh@10 -- # set +x 00:22:21.106 00:56:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.106 00:56:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:21.106 00:56:13 -- nvmf/common.sh@717 -- # local ip 00:22:21.106 00:56:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:21.106 00:56:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:21.106 00:56:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.106 00:56:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.106 00:56:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:21.106 00:56:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.106 00:56:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:21.106 00:56:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:21.106 00:56:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:21.106 00:56:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:21.106 00:56:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.106 00:56:13 -- common/autotest_common.sh@10 -- # set +x 00:22:21.672 nvme0n1 00:22:21.672 00:56:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.672 00:56:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:21.672 00:56:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:21.672 00:56:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.672 00:56:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.673 00:56:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.673 00:56:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.673 00:56:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:21.673 00:56:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.673 00:56:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.673 00:56:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.673 00:56:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:21.673 00:56:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:21.673 00:56:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:21.673 00:56:14 -- host/auth.sh@44 -- # digest=sha256 
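At this point the trace is exercising the sha256 digest with the ffdhe8192 DH group and walking the key IDs one by one. A minimal sketch of a single pass through that loop, assuming the helpers defined by the test suite's host/auth.sh (nvmet_auth_set_key and the rpc_cmd wrapper around scripts/rpc.py) and the example digest/dhgroup/keyid values visible in the trace:

    # One iteration of the DH-HMAC-CHAP matrix, as driven by host/auth.sh.
    digest=sha256
    dhgroup=ffdhe8192
    keyid=2
    # Target side: install the key for this digest/DH-group/key slot
    # (nvmet_auth_set_key is the test suite's own helper).
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
    # Initiator side: restrict the host to the same digest and DH group,
    # then attach with the matching key material over TCP.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"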
00:22:21.673 00:56:14 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:21.673 00:56:14 -- host/auth.sh@44 -- # keyid=2 00:22:21.673 00:56:14 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:21.673 00:56:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:21.673 00:56:14 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:21.673 00:56:14 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:21.673 00:56:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:22:21.673 00:56:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:21.673 00:56:14 -- host/auth.sh@68 -- # digest=sha256 00:22:21.673 00:56:14 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:21.673 00:56:14 -- host/auth.sh@68 -- # keyid=2 00:22:21.673 00:56:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:21.673 00:56:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.673 00:56:14 -- common/autotest_common.sh@10 -- # set +x 00:22:21.673 00:56:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:21.673 00:56:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:21.673 00:56:14 -- nvmf/common.sh@717 -- # local ip 00:22:21.673 00:56:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:21.673 00:56:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:21.673 00:56:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:21.673 00:56:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:21.673 00:56:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:21.673 00:56:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:21.673 00:56:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:21.673 00:56:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:21.673 00:56:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:21.673 00:56:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:21.673 00:56:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:21.673 00:56:14 -- common/autotest_common.sh@10 -- # set +x 00:22:22.608 nvme0n1 00:22:22.608 00:56:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.608 00:56:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:22.608 00:56:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:22.608 00:56:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.608 00:56:14 -- common/autotest_common.sh@10 -- # set +x 00:22:22.608 00:56:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.608 00:56:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.608 00:56:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:22.608 00:56:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.608 00:56:14 -- common/autotest_common.sh@10 -- # set +x 00:22:22.608 00:56:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.608 00:56:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:22.608 00:56:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:22.608 00:56:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:22.608 00:56:14 -- host/auth.sh@44 -- # digest=sha256 00:22:22.608 00:56:14 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:22.608 00:56:14 -- host/auth.sh@44 -- # keyid=3 00:22:22.608 00:56:14 -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:22.608 00:56:14 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:22.608 00:56:14 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:22.608 00:56:14 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:22.608 00:56:14 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:22:22.608 00:56:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:22.608 00:56:14 -- host/auth.sh@68 -- # digest=sha256 00:22:22.608 00:56:14 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:22.608 00:56:14 -- host/auth.sh@68 -- # keyid=3 00:22:22.608 00:56:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:22.608 00:56:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.608 00:56:14 -- common/autotest_common.sh@10 -- # set +x 00:22:22.608 00:56:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:22.608 00:56:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:22.608 00:56:15 -- nvmf/common.sh@717 -- # local ip 00:22:22.608 00:56:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:22.608 00:56:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:22.608 00:56:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:22.608 00:56:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:22.608 00:56:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:22.608 00:56:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:22.608 00:56:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:22.608 00:56:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:22.608 00:56:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:22.608 00:56:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:22.608 00:56:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:22.608 00:56:15 -- common/autotest_common.sh@10 -- # set +x 00:22:23.175 nvme0n1 00:22:23.175 00:56:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.175 00:56:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.175 00:56:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.175 00:56:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:23.175 00:56:15 -- common/autotest_common.sh@10 -- # set +x 00:22:23.175 00:56:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.175 00:56:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.175 00:56:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.175 00:56:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.175 00:56:15 -- common/autotest_common.sh@10 -- # set +x 00:22:23.175 00:56:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.175 00:56:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:23.175 00:56:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:23.175 00:56:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:23.175 00:56:15 -- host/auth.sh@44 -- # digest=sha256 00:22:23.175 00:56:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:23.175 00:56:15 -- host/auth.sh@44 -- # keyid=4 00:22:23.175 00:56:15 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:23.175 
00:56:15 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:23.175 00:56:15 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:23.175 00:56:15 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:23.175 00:56:15 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:22:23.175 00:56:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:23.175 00:56:15 -- host/auth.sh@68 -- # digest=sha256 00:22:23.175 00:56:15 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:23.175 00:56:15 -- host/auth.sh@68 -- # keyid=4 00:22:23.175 00:56:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:23.175 00:56:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.175 00:56:15 -- common/autotest_common.sh@10 -- # set +x 00:22:23.175 00:56:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.175 00:56:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:23.175 00:56:15 -- nvmf/common.sh@717 -- # local ip 00:22:23.175 00:56:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:23.175 00:56:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:23.175 00:56:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.175 00:56:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.175 00:56:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:23.175 00:56:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.175 00:56:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:23.175 00:56:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:23.175 00:56:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:23.175 00:56:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:23.175 00:56:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.175 00:56:15 -- common/autotest_common.sh@10 -- # set +x 00:22:23.742 nvme0n1 00:22:23.742 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.742 00:56:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:23.742 00:56:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:23.742 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.742 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:23.742 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.742 00:56:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.742 00:56:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.742 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.742 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:23.742 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.742 00:56:16 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:22:23.742 00:56:16 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:23.742 00:56:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:23.742 00:56:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:22:23.742 00:56:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:23.742 00:56:16 -- host/auth.sh@44 -- # digest=sha384 00:22:23.742 00:56:16 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:23.742 00:56:16 -- host/auth.sh@44 -- # keyid=0 00:22:23.742 00:56:16 -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:23.742 00:56:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:23.742 00:56:16 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:23.742 00:56:16 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:23.742 00:56:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:22:23.742 00:56:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:23.742 00:56:16 -- host/auth.sh@68 -- # digest=sha384 00:22:23.742 00:56:16 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:23.742 00:56:16 -- host/auth.sh@68 -- # keyid=0 00:22:23.742 00:56:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:23.742 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.742 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:23.742 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:23.742 00:56:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:23.742 00:56:16 -- nvmf/common.sh@717 -- # local ip 00:22:23.742 00:56:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:23.742 00:56:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:23.742 00:56:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.742 00:56:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.742 00:56:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:23.742 00:56:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:23.742 00:56:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:23.742 00:56:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:23.742 00:56:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:23.742 00:56:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:23.742 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:23.742 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.001 nvme0n1 00:22:24.001 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.001 00:56:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.001 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.001 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.001 00:56:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:24.001 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.001 00:56:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.001 00:56:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.001 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.001 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.001 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.001 00:56:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:24.001 00:56:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:22:24.001 00:56:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:24.001 00:56:16 -- host/auth.sh@44 -- # digest=sha384 00:22:24.001 00:56:16 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:24.001 00:56:16 -- host/auth.sh@44 -- # keyid=1 00:22:24.001 00:56:16 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:24.001 00:56:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:24.001 
00:56:16 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:24.001 00:56:16 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:24.001 00:56:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:22:24.001 00:56:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:24.001 00:56:16 -- host/auth.sh@68 -- # digest=sha384 00:22:24.001 00:56:16 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:24.001 00:56:16 -- host/auth.sh@68 -- # keyid=1 00:22:24.001 00:56:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:24.001 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.001 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.001 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.001 00:56:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:24.001 00:56:16 -- nvmf/common.sh@717 -- # local ip 00:22:24.001 00:56:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:24.001 00:56:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:24.001 00:56:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.001 00:56:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.001 00:56:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:24.001 00:56:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.001 00:56:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:24.001 00:56:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:24.001 00:56:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:24.001 00:56:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:24.001 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.001 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.001 nvme0n1 00:22:24.001 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.001 00:56:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:24.001 00:56:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.001 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.001 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.001 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.001 00:56:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.001 00:56:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.001 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.001 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.259 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.259 00:56:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:24.259 00:56:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:22:24.259 00:56:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:24.259 00:56:16 -- host/auth.sh@44 -- # digest=sha384 00:22:24.259 00:56:16 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:24.259 00:56:16 -- host/auth.sh@44 -- # keyid=2 00:22:24.259 00:56:16 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:24.259 00:56:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:24.259 00:56:16 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:24.259 00:56:16 -- host/auth.sh@49 -- # echo 
DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:24.259 00:56:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:22:24.259 00:56:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:24.259 00:56:16 -- host/auth.sh@68 -- # digest=sha384 00:22:24.259 00:56:16 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:24.259 00:56:16 -- host/auth.sh@68 -- # keyid=2 00:22:24.259 00:56:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:24.259 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.260 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.260 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.260 00:56:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:24.260 00:56:16 -- nvmf/common.sh@717 -- # local ip 00:22:24.260 00:56:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:24.260 00:56:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:24.260 00:56:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.260 00:56:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.260 00:56:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:24.260 00:56:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.260 00:56:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:24.260 00:56:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:24.260 00:56:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:24.260 00:56:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:24.260 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.260 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.260 nvme0n1 00:22:24.260 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.260 00:56:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:24.260 00:56:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.260 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.260 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.260 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.260 00:56:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.260 00:56:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.260 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.260 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.260 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.260 00:56:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:24.260 00:56:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:22:24.260 00:56:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:24.260 00:56:16 -- host/auth.sh@44 -- # digest=sha384 00:22:24.260 00:56:16 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:24.260 00:56:16 -- host/auth.sh@44 -- # keyid=3 00:22:24.260 00:56:16 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:24.260 00:56:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:24.260 00:56:16 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:24.260 00:56:16 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:24.260 00:56:16 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:22:24.260 00:56:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:24.260 00:56:16 -- host/auth.sh@68 -- # digest=sha384 00:22:24.260 00:56:16 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:24.260 00:56:16 -- host/auth.sh@68 -- # keyid=3 00:22:24.260 00:56:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:24.260 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.260 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.260 00:56:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.260 00:56:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:24.260 00:56:16 -- nvmf/common.sh@717 -- # local ip 00:22:24.260 00:56:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:24.260 00:56:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:24.260 00:56:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.260 00:56:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.260 00:56:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:24.260 00:56:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.260 00:56:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:24.260 00:56:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:24.260 00:56:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:24.260 00:56:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:24.260 00:56:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.260 00:56:16 -- common/autotest_common.sh@10 -- # set +x 00:22:24.518 nvme0n1 00:22:24.518 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.518 00:56:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.518 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.518 00:56:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:24.518 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:24.518 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.518 00:56:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.518 00:56:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.518 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.518 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:24.518 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.518 00:56:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:24.518 00:56:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:22:24.518 00:56:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:24.518 00:56:17 -- host/auth.sh@44 -- # digest=sha384 00:22:24.518 00:56:17 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:24.518 00:56:17 -- host/auth.sh@44 -- # keyid=4 00:22:24.518 00:56:17 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:24.519 00:56:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:24.519 00:56:17 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:24.519 00:56:17 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:24.519 00:56:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:22:24.519 00:56:17 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:22:24.519 00:56:17 -- host/auth.sh@68 -- # digest=sha384 00:22:24.519 00:56:17 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:24.519 00:56:17 -- host/auth.sh@68 -- # keyid=4 00:22:24.519 00:56:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:24.519 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.519 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:24.519 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.519 00:56:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:24.519 00:56:17 -- nvmf/common.sh@717 -- # local ip 00:22:24.519 00:56:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:24.519 00:56:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:24.519 00:56:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.519 00:56:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.519 00:56:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:24.519 00:56:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.519 00:56:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:24.519 00:56:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:24.519 00:56:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:24.519 00:56:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:24.519 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.519 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:24.777 nvme0n1 00:22:24.777 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.777 00:56:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:24.777 00:56:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:24.777 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.777 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:24.777 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.777 00:56:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.777 00:56:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.777 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.777 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:24.777 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.777 00:56:17 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:24.777 00:56:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:24.777 00:56:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:22:24.777 00:56:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:24.777 00:56:17 -- host/auth.sh@44 -- # digest=sha384 00:22:24.777 00:56:17 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:24.777 00:56:17 -- host/auth.sh@44 -- # keyid=0 00:22:24.777 00:56:17 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:24.777 00:56:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:24.777 00:56:17 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:24.777 00:56:17 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:24.777 00:56:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:22:24.777 00:56:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:24.777 00:56:17 -- host/auth.sh@68 -- # 
digest=sha384 00:22:24.777 00:56:17 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:24.777 00:56:17 -- host/auth.sh@68 -- # keyid=0 00:22:24.777 00:56:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:24.777 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.777 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:24.777 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:24.777 00:56:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:24.777 00:56:17 -- nvmf/common.sh@717 -- # local ip 00:22:24.777 00:56:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:24.777 00:56:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:24.777 00:56:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:24.777 00:56:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:24.777 00:56:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:24.777 00:56:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:24.777 00:56:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:24.777 00:56:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:24.777 00:56:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:24.777 00:56:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:24.777 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:24.777 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:25.036 nvme0n1 00:22:25.036 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.036 00:56:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:25.036 00:56:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.036 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.036 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:25.036 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.036 00:56:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.036 00:56:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.036 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.036 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:25.036 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.036 00:56:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:25.036 00:56:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:22:25.036 00:56:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:25.036 00:56:17 -- host/auth.sh@44 -- # digest=sha384 00:22:25.036 00:56:17 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:25.036 00:56:17 -- host/auth.sh@44 -- # keyid=1 00:22:25.036 00:56:17 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:25.036 00:56:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:25.036 00:56:17 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:25.036 00:56:17 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:25.036 00:56:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:22:25.036 00:56:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:25.036 00:56:17 -- host/auth.sh@68 -- # digest=sha384 00:22:25.036 00:56:17 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:25.036 00:56:17 -- host/auth.sh@68 
-- # keyid=1 00:22:25.036 00:56:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:25.036 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.036 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:25.036 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.036 00:56:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:25.036 00:56:17 -- nvmf/common.sh@717 -- # local ip 00:22:25.036 00:56:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:25.036 00:56:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:25.036 00:56:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.036 00:56:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.036 00:56:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:25.036 00:56:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.036 00:56:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:25.036 00:56:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:25.036 00:56:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:25.036 00:56:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:25.036 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.036 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:25.295 nvme0n1 00:22:25.295 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.295 00:56:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.295 00:56:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:25.295 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.295 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:25.295 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.295 00:56:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.295 00:56:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.295 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.295 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:25.295 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.295 00:56:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:25.295 00:56:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:22:25.295 00:56:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:25.295 00:56:17 -- host/auth.sh@44 -- # digest=sha384 00:22:25.295 00:56:17 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:25.295 00:56:17 -- host/auth.sh@44 -- # keyid=2 00:22:25.295 00:56:17 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:25.295 00:56:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:25.295 00:56:17 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:25.295 00:56:17 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:25.295 00:56:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:22:25.295 00:56:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:25.295 00:56:17 -- host/auth.sh@68 -- # digest=sha384 00:22:25.295 00:56:17 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:25.295 00:56:17 -- host/auth.sh@68 -- # keyid=2 00:22:25.295 00:56:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:25.295 00:56:17 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.295 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:25.295 00:56:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.295 00:56:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:25.295 00:56:17 -- nvmf/common.sh@717 -- # local ip 00:22:25.295 00:56:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:25.295 00:56:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:25.295 00:56:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.295 00:56:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.295 00:56:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:25.295 00:56:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.295 00:56:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:25.295 00:56:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:25.295 00:56:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:25.295 00:56:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:25.295 00:56:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.295 00:56:17 -- common/autotest_common.sh@10 -- # set +x 00:22:25.553 nvme0n1 00:22:25.553 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.553 00:56:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:25.553 00:56:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.553 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.553 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:25.553 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.553 00:56:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.553 00:56:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.553 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.553 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:25.553 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.553 00:56:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:25.553 00:56:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:22:25.553 00:56:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:25.553 00:56:18 -- host/auth.sh@44 -- # digest=sha384 00:22:25.553 00:56:18 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:25.553 00:56:18 -- host/auth.sh@44 -- # keyid=3 00:22:25.553 00:56:18 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:25.553 00:56:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:25.553 00:56:18 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:25.553 00:56:18 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:25.553 00:56:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:22:25.553 00:56:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:25.553 00:56:18 -- host/auth.sh@68 -- # digest=sha384 00:22:25.553 00:56:18 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:25.553 00:56:18 -- host/auth.sh@68 -- # keyid=3 00:22:25.553 00:56:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:25.553 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.553 00:56:18 -- common/autotest_common.sh@10 -- # set +x 
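Between attaches, the script only needs a quick verify-and-detach step before moving to the next digest/dhgroup/keyid combination. A short sketch of that step, assuming the same rpc_cmd wrapper seen in the trace (jq is used exactly as in the log to pull the controller name):

    # The authenticated attach is considered successful when the bdev layer
    # reports back a controller named nvme0.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]]
    # Drop the controller so the next combination starts from a clean state.
    rpc_cmd bdev_nvme_detach_controller nvme0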
00:22:25.553 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.553 00:56:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:25.553 00:56:18 -- nvmf/common.sh@717 -- # local ip 00:22:25.553 00:56:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:25.553 00:56:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:25.553 00:56:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.553 00:56:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.553 00:56:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:25.553 00:56:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.553 00:56:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:25.553 00:56:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:25.553 00:56:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:25.553 00:56:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:25.553 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.553 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:25.553 nvme0n1 00:22:25.553 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.553 00:56:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.553 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.553 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:25.553 00:56:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:25.812 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.812 00:56:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.812 00:56:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:25.812 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.812 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:25.812 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.812 00:56:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:25.812 00:56:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:22:25.812 00:56:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:25.812 00:56:18 -- host/auth.sh@44 -- # digest=sha384 00:22:25.812 00:56:18 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:25.812 00:56:18 -- host/auth.sh@44 -- # keyid=4 00:22:25.812 00:56:18 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:25.812 00:56:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:25.812 00:56:18 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:25.812 00:56:18 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:25.812 00:56:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:22:25.812 00:56:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:25.812 00:56:18 -- host/auth.sh@68 -- # digest=sha384 00:22:25.812 00:56:18 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:25.812 00:56:18 -- host/auth.sh@68 -- # keyid=4 00:22:25.812 00:56:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:25.812 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.812 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:25.812 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:22:25.812 00:56:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:25.812 00:56:18 -- nvmf/common.sh@717 -- # local ip 00:22:25.812 00:56:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:25.813 00:56:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:25.813 00:56:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:25.813 00:56:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:25.813 00:56:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:25.813 00:56:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:25.813 00:56:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:25.813 00:56:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:25.813 00:56:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:25.813 00:56:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:25.813 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.813 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:25.813 nvme0n1 00:22:25.813 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:25.813 00:56:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:25.813 00:56:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:25.813 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:25.813 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:25.813 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.070 00:56:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.070 00:56:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.070 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.070 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:26.070 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.070 00:56:18 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:26.070 00:56:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:26.070 00:56:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:22:26.070 00:56:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:26.070 00:56:18 -- host/auth.sh@44 -- # digest=sha384 00:22:26.070 00:56:18 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:26.070 00:56:18 -- host/auth.sh@44 -- # keyid=0 00:22:26.070 00:56:18 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:26.070 00:56:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:26.070 00:56:18 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:26.070 00:56:18 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:26.070 00:56:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:22:26.070 00:56:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:26.070 00:56:18 -- host/auth.sh@68 -- # digest=sha384 00:22:26.070 00:56:18 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:26.070 00:56:18 -- host/auth.sh@68 -- # keyid=0 00:22:26.070 00:56:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:26.070 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.070 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:26.070 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.070 00:56:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:26.070 00:56:18 -- 
nvmf/common.sh@717 -- # local ip 00:22:26.070 00:56:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:26.070 00:56:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:26.070 00:56:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.070 00:56:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.070 00:56:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:26.070 00:56:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.070 00:56:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:26.070 00:56:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:26.070 00:56:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:26.070 00:56:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:26.070 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.070 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:26.329 nvme0n1 00:22:26.329 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.329 00:56:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:26.329 00:56:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.329 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.329 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:26.329 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.329 00:56:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.329 00:56:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.329 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.329 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:26.329 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.329 00:56:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:26.329 00:56:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:22:26.329 00:56:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:26.329 00:56:18 -- host/auth.sh@44 -- # digest=sha384 00:22:26.329 00:56:18 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:26.329 00:56:18 -- host/auth.sh@44 -- # keyid=1 00:22:26.329 00:56:18 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:26.329 00:56:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:26.329 00:56:18 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:26.329 00:56:18 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:26.329 00:56:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:22:26.329 00:56:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:26.329 00:56:18 -- host/auth.sh@68 -- # digest=sha384 00:22:26.329 00:56:18 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:26.329 00:56:18 -- host/auth.sh@68 -- # keyid=1 00:22:26.329 00:56:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:26.329 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.329 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:26.329 00:56:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.329 00:56:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:26.329 00:56:18 -- nvmf/common.sh@717 -- # local ip 00:22:26.329 00:56:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:26.329 00:56:18 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:26.329 00:56:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.329 00:56:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.329 00:56:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:26.329 00:56:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.329 00:56:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:26.329 00:56:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:26.329 00:56:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:26.329 00:56:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:26.329 00:56:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.329 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:26.587 nvme0n1 00:22:26.587 00:56:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.587 00:56:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.587 00:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.587 00:56:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:26.587 00:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.587 00:56:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.587 00:56:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.587 00:56:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.587 00:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.587 00:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.587 00:56:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.587 00:56:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:26.587 00:56:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:22:26.587 00:56:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:26.587 00:56:19 -- host/auth.sh@44 -- # digest=sha384 00:22:26.587 00:56:19 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:26.587 00:56:19 -- host/auth.sh@44 -- # keyid=2 00:22:26.587 00:56:19 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:26.587 00:56:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:26.587 00:56:19 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:26.587 00:56:19 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:26.587 00:56:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:22:26.587 00:56:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:26.587 00:56:19 -- host/auth.sh@68 -- # digest=sha384 00:22:26.587 00:56:19 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:26.587 00:56:19 -- host/auth.sh@68 -- # keyid=2 00:22:26.587 00:56:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:26.587 00:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.587 00:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.587 00:56:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.587 00:56:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:26.587 00:56:19 -- nvmf/common.sh@717 -- # local ip 00:22:26.587 00:56:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:26.587 00:56:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:26.587 00:56:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.587 00:56:19 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.587 00:56:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:26.587 00:56:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.587 00:56:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:26.587 00:56:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:26.587 00:56:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:26.587 00:56:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:26.587 00:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.587 00:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.845 nvme0n1 00:22:26.845 00:56:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.845 00:56:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:26.845 00:56:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:26.845 00:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.845 00:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.845 00:56:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.845 00:56:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.845 00:56:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:26.845 00:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.845 00:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.845 00:56:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.845 00:56:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:26.845 00:56:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:22:26.845 00:56:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:26.845 00:56:19 -- host/auth.sh@44 -- # digest=sha384 00:22:26.845 00:56:19 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:26.845 00:56:19 -- host/auth.sh@44 -- # keyid=3 00:22:26.845 00:56:19 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:26.845 00:56:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:26.845 00:56:19 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:26.845 00:56:19 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:26.845 00:56:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:22:26.845 00:56:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:26.845 00:56:19 -- host/auth.sh@68 -- # digest=sha384 00:22:26.845 00:56:19 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:26.845 00:56:19 -- host/auth.sh@68 -- # keyid=3 00:22:26.845 00:56:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:26.845 00:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.845 00:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:26.845 00:56:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.845 00:56:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:26.845 00:56:19 -- nvmf/common.sh@717 -- # local ip 00:22:26.845 00:56:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:26.845 00:56:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:26.845 00:56:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:26.845 00:56:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:26.845 00:56:19 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:22:26.845 00:56:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:26.845 00:56:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:26.845 00:56:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:26.845 00:56:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:26.845 00:56:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:26.845 00:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.845 00:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:27.103 nvme0n1 00:22:27.103 00:56:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.103 00:56:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:27.103 00:56:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.103 00:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.103 00:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:27.103 00:56:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.103 00:56:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.103 00:56:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.103 00:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.103 00:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:27.103 00:56:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.103 00:56:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:27.103 00:56:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:22:27.103 00:56:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:27.103 00:56:19 -- host/auth.sh@44 -- # digest=sha384 00:22:27.103 00:56:19 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:27.103 00:56:19 -- host/auth.sh@44 -- # keyid=4 00:22:27.103 00:56:19 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:27.103 00:56:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:27.103 00:56:19 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:27.103 00:56:19 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:27.103 00:56:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:22:27.103 00:56:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:27.103 00:56:19 -- host/auth.sh@68 -- # digest=sha384 00:22:27.103 00:56:19 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:27.103 00:56:19 -- host/auth.sh@68 -- # keyid=4 00:22:27.103 00:56:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:27.103 00:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.103 00:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:27.104 00:56:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.104 00:56:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:27.104 00:56:19 -- nvmf/common.sh@717 -- # local ip 00:22:27.104 00:56:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:27.104 00:56:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:27.104 00:56:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.104 00:56:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.104 00:56:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:27.362 00:56:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:22:27.362 00:56:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:27.362 00:56:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:27.362 00:56:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:27.362 00:56:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:27.362 00:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.362 00:56:19 -- common/autotest_common.sh@10 -- # set +x 00:22:27.362 nvme0n1 00:22:27.619 00:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.619 00:56:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.619 00:56:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:27.619 00:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.619 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:22:27.619 00:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.619 00:56:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.619 00:56:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.619 00:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.619 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:22:27.619 00:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.619 00:56:20 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:27.619 00:56:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:27.619 00:56:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:22:27.619 00:56:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:27.619 00:56:20 -- host/auth.sh@44 -- # digest=sha384 00:22:27.619 00:56:20 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:27.619 00:56:20 -- host/auth.sh@44 -- # keyid=0 00:22:27.619 00:56:20 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:27.619 00:56:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:27.619 00:56:20 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:27.619 00:56:20 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:27.619 00:56:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:22:27.619 00:56:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:27.619 00:56:20 -- host/auth.sh@68 -- # digest=sha384 00:22:27.620 00:56:20 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:27.620 00:56:20 -- host/auth.sh@68 -- # keyid=0 00:22:27.620 00:56:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:27.620 00:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.620 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:22:27.620 00:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.620 00:56:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:27.620 00:56:20 -- nvmf/common.sh@717 -- # local ip 00:22:27.620 00:56:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:27.620 00:56:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:27.620 00:56:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.620 00:56:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.620 00:56:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:27.620 00:56:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:27.620 00:56:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:27.620 
00:56:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:27.620 00:56:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:27.620 00:56:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:27.620 00:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.620 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:22:27.878 nvme0n1 00:22:27.878 00:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.878 00:56:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:27.878 00:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.878 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:22:27.878 00:56:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:27.878 00:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.878 00:56:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.878 00:56:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:27.878 00:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.878 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:22:27.878 00:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.878 00:56:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:27.878 00:56:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:22:27.878 00:56:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:27.878 00:56:20 -- host/auth.sh@44 -- # digest=sha384 00:22:27.878 00:56:20 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:27.878 00:56:20 -- host/auth.sh@44 -- # keyid=1 00:22:27.878 00:56:20 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:27.878 00:56:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:27.878 00:56:20 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:27.878 00:56:20 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:27.878 00:56:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:22:27.878 00:56:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:27.878 00:56:20 -- host/auth.sh@68 -- # digest=sha384 00:22:27.878 00:56:20 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:27.878 00:56:20 -- host/auth.sh@68 -- # keyid=1 00:22:27.878 00:56:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:27.878 00:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:27.878 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:22:27.878 00:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:27.878 00:56:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:27.878 00:56:20 -- nvmf/common.sh@717 -- # local ip 00:22:27.878 00:56:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:27.878 00:56:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:27.878 00:56:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:27.878 00:56:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:27.878 00:56:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:27.878 00:56:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:27.878 00:56:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:27.878 00:56:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:27.878 00:56:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
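[editor's note] The entries before and after this point repeat one authentication round per (digest, dhgroup, keyid) combination. As a reading aid, here is a minimal sketch of the initiator-side RPC sequence each round performs, reconstructed only from the rpc_cmd invocations visible in this trace; the ./scripts/rpc.py path and the bare-shell framing are assumptions (the log drives these calls through the rpc_cmd wrapper), while the subcommands, flags, NQNs and address are taken verbatim from the log. The attach that follows immediately below is the sha384/ffdhe6144/key1 instance of this cycle.

    # Sketch of one round (values from the sha384/ffdhe6144/key1 iteration in progress here);
    # rpc.py invocation path is an assumption -- the test calls these via its rpc_cmd wrapper.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key1
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0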
00:22:28.136 00:56:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:28.136 00:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.136 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:22:28.393 nvme0n1 00:22:28.393 00:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.393 00:56:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.393 00:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.393 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:22:28.393 00:56:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:28.393 00:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.393 00:56:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.393 00:56:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.393 00:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.393 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:22:28.393 00:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.393 00:56:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:28.393 00:56:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:22:28.393 00:56:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:28.393 00:56:20 -- host/auth.sh@44 -- # digest=sha384 00:22:28.393 00:56:20 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:28.393 00:56:20 -- host/auth.sh@44 -- # keyid=2 00:22:28.393 00:56:20 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:28.393 00:56:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:28.393 00:56:20 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:28.393 00:56:20 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:28.393 00:56:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:22:28.393 00:56:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:28.393 00:56:20 -- host/auth.sh@68 -- # digest=sha384 00:22:28.393 00:56:20 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:28.393 00:56:20 -- host/auth.sh@68 -- # keyid=2 00:22:28.393 00:56:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:28.393 00:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.393 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:22:28.393 00:56:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.393 00:56:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:28.393 00:56:20 -- nvmf/common.sh@717 -- # local ip 00:22:28.393 00:56:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:28.393 00:56:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:28.393 00:56:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.393 00:56:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.393 00:56:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:28.393 00:56:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.393 00:56:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:28.393 00:56:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:28.393 00:56:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:28.393 00:56:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:28.393 00:56:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.393 00:56:20 -- common/autotest_common.sh@10 -- # set +x 00:22:28.958 nvme0n1 00:22:28.958 00:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.958 00:56:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:28.958 00:56:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:28.958 00:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.958 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:22:28.958 00:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.958 00:56:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.958 00:56:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:28.958 00:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.958 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:22:28.958 00:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.958 00:56:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:28.958 00:56:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:22:28.958 00:56:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:28.958 00:56:21 -- host/auth.sh@44 -- # digest=sha384 00:22:28.958 00:56:21 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:28.958 00:56:21 -- host/auth.sh@44 -- # keyid=3 00:22:28.958 00:56:21 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:28.958 00:56:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:28.958 00:56:21 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:28.958 00:56:21 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:28.958 00:56:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:22:28.958 00:56:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:28.958 00:56:21 -- host/auth.sh@68 -- # digest=sha384 00:22:28.958 00:56:21 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:28.958 00:56:21 -- host/auth.sh@68 -- # keyid=3 00:22:28.958 00:56:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:28.958 00:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:28.958 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:22:28.958 00:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:28.958 00:56:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:28.958 00:56:21 -- nvmf/common.sh@717 -- # local ip 00:22:28.958 00:56:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:28.958 00:56:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:28.958 00:56:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.958 00:56:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.958 00:56:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:28.958 00:56:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.958 00:56:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:28.958 00:56:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:28.958 00:56:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:28.958 00:56:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:28.958 00:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:22:28.958 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:22:29.216 nvme0n1 00:22:29.216 00:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.216 00:56:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:29.216 00:56:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.216 00:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.216 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:22:29.216 00:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.216 00:56:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.216 00:56:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.216 00:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.216 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:22:29.216 00:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.216 00:56:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:29.216 00:56:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:22:29.216 00:56:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:29.216 00:56:21 -- host/auth.sh@44 -- # digest=sha384 00:22:29.216 00:56:21 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:29.216 00:56:21 -- host/auth.sh@44 -- # keyid=4 00:22:29.216 00:56:21 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:29.216 00:56:21 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:29.216 00:56:21 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:29.216 00:56:21 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:29.216 00:56:21 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:22:29.216 00:56:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:29.216 00:56:21 -- host/auth.sh@68 -- # digest=sha384 00:22:29.216 00:56:21 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:29.217 00:56:21 -- host/auth.sh@68 -- # keyid=4 00:22:29.217 00:56:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:29.217 00:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.217 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:22:29.217 00:56:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.217 00:56:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:29.217 00:56:21 -- nvmf/common.sh@717 -- # local ip 00:22:29.217 00:56:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:29.217 00:56:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:29.217 00:56:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.217 00:56:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.217 00:56:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:29.217 00:56:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.217 00:56:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:29.217 00:56:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:29.217 00:56:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:29.217 00:56:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:29.217 00:56:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.217 00:56:21 -- common/autotest_common.sh@10 -- # set +x 00:22:29.784 
nvme0n1 00:22:29.784 00:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.784 00:56:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:29.784 00:56:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:29.784 00:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.784 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:22:29.784 00:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.784 00:56:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.784 00:56:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.784 00:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.784 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:22:29.784 00:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.784 00:56:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:29.784 00:56:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:29.784 00:56:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:22:29.784 00:56:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:29.784 00:56:22 -- host/auth.sh@44 -- # digest=sha384 00:22:29.784 00:56:22 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:29.784 00:56:22 -- host/auth.sh@44 -- # keyid=0 00:22:29.784 00:56:22 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:29.784 00:56:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:29.784 00:56:22 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:29.784 00:56:22 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:29.784 00:56:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:22:29.784 00:56:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:29.784 00:56:22 -- host/auth.sh@68 -- # digest=sha384 00:22:29.784 00:56:22 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:29.784 00:56:22 -- host/auth.sh@68 -- # keyid=0 00:22:29.784 00:56:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:29.784 00:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.784 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:22:29.784 00:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:29.784 00:56:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:29.784 00:56:22 -- nvmf/common.sh@717 -- # local ip 00:22:29.784 00:56:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:29.784 00:56:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:29.784 00:56:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:29.784 00:56:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:29.784 00:56:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:29.784 00:56:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:29.784 00:56:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:29.784 00:56:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:29.784 00:56:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:29.784 00:56:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:29.784 00:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:29.784 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:22:30.351 nvme0n1 00:22:30.351 00:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
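[editor's note] Stepping back, the repetition in this trace comes from three nested loops in host/auth.sh (the "for digest", "for dhgroup" and "for keyid" entries at auth.sh@107-109 that recur throughout). A rough sketch of that driver, with the loop bodies named exactly as they appear in the trace; the array contents shown in the comments are inferred from the values seen in this excerpt and may not be the full set the script covers.

    for digest in "${digests[@]}"; do            # sha384 and sha512 appear in this excerpt
        for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 .. ffdhe8192 appear in this excerpt
            for keyid in "${!keys[@]}"; do       # key ids 0-4
                # host/auth.sh@110: program the kernel nvmet target with this key/digest/dhgroup
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                # host/auth.sh@111: attach via SPDK RPC, verify nvme0 appears, then detach
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done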
00:22:30.351 00:56:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:30.351 00:56:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.351 00:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.351 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:22:30.351 00:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.351 00:56:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.351 00:56:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.351 00:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.351 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:22:30.351 00:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.351 00:56:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:30.351 00:56:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:22:30.351 00:56:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:30.351 00:56:22 -- host/auth.sh@44 -- # digest=sha384 00:22:30.351 00:56:22 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:30.351 00:56:22 -- host/auth.sh@44 -- # keyid=1 00:22:30.351 00:56:22 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:30.351 00:56:22 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:30.351 00:56:22 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:30.351 00:56:22 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:30.351 00:56:22 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:22:30.351 00:56:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:30.351 00:56:22 -- host/auth.sh@68 -- # digest=sha384 00:22:30.351 00:56:22 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:30.351 00:56:22 -- host/auth.sh@68 -- # keyid=1 00:22:30.351 00:56:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:30.351 00:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.351 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:22:30.352 00:56:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.352 00:56:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:30.352 00:56:22 -- nvmf/common.sh@717 -- # local ip 00:22:30.352 00:56:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:30.352 00:56:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:30.352 00:56:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.352 00:56:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.352 00:56:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:30.352 00:56:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.352 00:56:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:30.352 00:56:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:30.352 00:56:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:30.352 00:56:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:30.352 00:56:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.352 00:56:22 -- common/autotest_common.sh@10 -- # set +x 00:22:30.916 nvme0n1 00:22:30.916 00:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.916 00:56:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:30.916 00:56:23 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.916 00:56:23 -- common/autotest_common.sh@10 -- # set +x 00:22:30.916 00:56:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:30.916 00:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.916 00:56:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.916 00:56:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:30.916 00:56:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.916 00:56:23 -- common/autotest_common.sh@10 -- # set +x 00:22:30.916 00:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.916 00:56:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:30.916 00:56:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:22:30.916 00:56:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:30.916 00:56:23 -- host/auth.sh@44 -- # digest=sha384 00:22:30.916 00:56:23 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:30.916 00:56:23 -- host/auth.sh@44 -- # keyid=2 00:22:30.916 00:56:23 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:30.916 00:56:23 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:30.916 00:56:23 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:30.916 00:56:23 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:30.916 00:56:23 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:22:30.916 00:56:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:30.916 00:56:23 -- host/auth.sh@68 -- # digest=sha384 00:22:30.916 00:56:23 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:30.916 00:56:23 -- host/auth.sh@68 -- # keyid=2 00:22:30.916 00:56:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:30.916 00:56:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.916 00:56:23 -- common/autotest_common.sh@10 -- # set +x 00:22:30.916 00:56:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:30.916 00:56:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:30.916 00:56:23 -- nvmf/common.sh@717 -- # local ip 00:22:30.916 00:56:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:30.916 00:56:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:30.916 00:56:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:30.916 00:56:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:30.916 00:56:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:30.916 00:56:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:30.916 00:56:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:30.916 00:56:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:30.916 00:56:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:30.916 00:56:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:30.916 00:56:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:30.916 00:56:23 -- common/autotest_common.sh@10 -- # set +x 00:22:31.482 nvme0n1 00:22:31.482 00:56:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.482 00:56:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:31.482 00:56:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:31.482 00:56:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.482 00:56:24 -- common/autotest_common.sh@10 
-- # set +x 00:22:31.482 00:56:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.482 00:56:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.741 00:56:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:31.741 00:56:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.741 00:56:24 -- common/autotest_common.sh@10 -- # set +x 00:22:31.741 00:56:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.741 00:56:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:31.741 00:56:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:22:31.741 00:56:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:31.741 00:56:24 -- host/auth.sh@44 -- # digest=sha384 00:22:31.741 00:56:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:31.741 00:56:24 -- host/auth.sh@44 -- # keyid=3 00:22:31.741 00:56:24 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:31.741 00:56:24 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:31.741 00:56:24 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:31.741 00:56:24 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:31.741 00:56:24 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:22:31.741 00:56:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:31.741 00:56:24 -- host/auth.sh@68 -- # digest=sha384 00:22:31.741 00:56:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:31.741 00:56:24 -- host/auth.sh@68 -- # keyid=3 00:22:31.741 00:56:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:31.741 00:56:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.741 00:56:24 -- common/autotest_common.sh@10 -- # set +x 00:22:31.741 00:56:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:31.741 00:56:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:31.741 00:56:24 -- nvmf/common.sh@717 -- # local ip 00:22:31.741 00:56:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:31.741 00:56:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:31.741 00:56:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:31.741 00:56:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:31.741 00:56:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:31.741 00:56:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:31.741 00:56:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:31.741 00:56:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:31.741 00:56:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:31.741 00:56:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:31.741 00:56:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:31.741 00:56:24 -- common/autotest_common.sh@10 -- # set +x 00:22:32.307 nvme0n1 00:22:32.307 00:56:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.307 00:56:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.307 00:56:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:32.307 00:56:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.307 00:56:24 -- common/autotest_common.sh@10 -- # set +x 00:22:32.307 00:56:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.307 00:56:24 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.307 00:56:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.307 00:56:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.307 00:56:24 -- common/autotest_common.sh@10 -- # set +x 00:22:32.307 00:56:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.307 00:56:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:32.307 00:56:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:22:32.307 00:56:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:32.307 00:56:24 -- host/auth.sh@44 -- # digest=sha384 00:22:32.307 00:56:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:32.307 00:56:24 -- host/auth.sh@44 -- # keyid=4 00:22:32.307 00:56:24 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:32.307 00:56:24 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:22:32.307 00:56:24 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:32.307 00:56:24 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:32.307 00:56:24 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:22:32.308 00:56:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:32.308 00:56:24 -- host/auth.sh@68 -- # digest=sha384 00:22:32.308 00:56:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:32.308 00:56:24 -- host/auth.sh@68 -- # keyid=4 00:22:32.308 00:56:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:32.308 00:56:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.308 00:56:24 -- common/autotest_common.sh@10 -- # set +x 00:22:32.308 00:56:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.308 00:56:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:32.308 00:56:24 -- nvmf/common.sh@717 -- # local ip 00:22:32.308 00:56:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:32.308 00:56:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:32.308 00:56:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.308 00:56:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.308 00:56:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:32.308 00:56:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.308 00:56:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:32.308 00:56:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:32.308 00:56:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:32.308 00:56:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:32.308 00:56:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.308 00:56:24 -- common/autotest_common.sh@10 -- # set +x 00:22:32.875 nvme0n1 00:22:32.875 00:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.875 00:56:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:32.875 00:56:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:32.875 00:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.875 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:32.875 00:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.875 00:56:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.875 00:56:25 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:32.875 00:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.875 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:32.875 00:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.875 00:56:25 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:22:32.875 00:56:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:32.875 00:56:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:32.875 00:56:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:22:32.875 00:56:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:32.875 00:56:25 -- host/auth.sh@44 -- # digest=sha512 00:22:32.875 00:56:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:32.875 00:56:25 -- host/auth.sh@44 -- # keyid=0 00:22:32.875 00:56:25 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:32.875 00:56:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:32.875 00:56:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:32.875 00:56:25 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:32.875 00:56:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:22:32.875 00:56:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:32.875 00:56:25 -- host/auth.sh@68 -- # digest=sha512 00:22:32.875 00:56:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:32.875 00:56:25 -- host/auth.sh@68 -- # keyid=0 00:22:32.875 00:56:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:32.875 00:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.875 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:32.875 00:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:32.875 00:56:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:32.875 00:56:25 -- nvmf/common.sh@717 -- # local ip 00:22:32.875 00:56:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:32.875 00:56:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:32.875 00:56:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:32.875 00:56:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:32.875 00:56:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:32.875 00:56:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:32.875 00:56:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:32.875 00:56:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:32.875 00:56:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:32.875 00:56:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:32.875 00:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:32.875 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:33.134 nvme0n1 00:22:33.134 00:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.134 00:56:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.135 00:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.135 00:56:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:33.135 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:33.135 00:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.135 00:56:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.135 00:56:25 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.135 00:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.135 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:33.135 00:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.135 00:56:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:33.135 00:56:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:22:33.135 00:56:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:33.135 00:56:25 -- host/auth.sh@44 -- # digest=sha512 00:22:33.135 00:56:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.135 00:56:25 -- host/auth.sh@44 -- # keyid=1 00:22:33.135 00:56:25 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:33.135 00:56:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:33.135 00:56:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:33.135 00:56:25 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:33.135 00:56:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:22:33.135 00:56:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:33.135 00:56:25 -- host/auth.sh@68 -- # digest=sha512 00:22:33.135 00:56:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:33.135 00:56:25 -- host/auth.sh@68 -- # keyid=1 00:22:33.135 00:56:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:33.135 00:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.135 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:33.135 00:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.135 00:56:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:33.135 00:56:25 -- nvmf/common.sh@717 -- # local ip 00:22:33.135 00:56:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:33.135 00:56:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:33.135 00:56:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.135 00:56:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.135 00:56:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:33.135 00:56:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.135 00:56:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:33.135 00:56:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:33.135 00:56:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:33.135 00:56:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:33.135 00:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.135 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:33.394 nvme0n1 00:22:33.394 00:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.394 00:56:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:33.394 00:56:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.394 00:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.394 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:33.394 00:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.394 00:56:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.394 00:56:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.394 00:56:25 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:22:33.394 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:33.394 00:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.394 00:56:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:33.394 00:56:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:22:33.394 00:56:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:33.394 00:56:25 -- host/auth.sh@44 -- # digest=sha512 00:22:33.394 00:56:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.394 00:56:25 -- host/auth.sh@44 -- # keyid=2 00:22:33.394 00:56:25 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:33.394 00:56:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:33.394 00:56:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:33.394 00:56:25 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:33.394 00:56:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:22:33.394 00:56:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:33.394 00:56:25 -- host/auth.sh@68 -- # digest=sha512 00:22:33.394 00:56:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:33.394 00:56:25 -- host/auth.sh@68 -- # keyid=2 00:22:33.394 00:56:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:33.394 00:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.394 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:33.394 00:56:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.394 00:56:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:33.394 00:56:25 -- nvmf/common.sh@717 -- # local ip 00:22:33.394 00:56:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:33.394 00:56:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:33.394 00:56:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.394 00:56:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.394 00:56:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:33.394 00:56:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.394 00:56:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:33.394 00:56:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:33.394 00:56:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:33.394 00:56:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:33.394 00:56:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.394 00:56:25 -- common/autotest_common.sh@10 -- # set +x 00:22:33.394 nvme0n1 00:22:33.394 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.394 00:56:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.394 00:56:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:33.394 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.394 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:33.653 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.653 00:56:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.653 00:56:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.653 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.653 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:33.653 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.653 
00:56:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:33.653 00:56:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:22:33.653 00:56:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:33.653 00:56:26 -- host/auth.sh@44 -- # digest=sha512 00:22:33.653 00:56:26 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.653 00:56:26 -- host/auth.sh@44 -- # keyid=3 00:22:33.653 00:56:26 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:33.653 00:56:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:33.653 00:56:26 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:33.653 00:56:26 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:33.653 00:56:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:22:33.653 00:56:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:33.653 00:56:26 -- host/auth.sh@68 -- # digest=sha512 00:22:33.653 00:56:26 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:33.653 00:56:26 -- host/auth.sh@68 -- # keyid=3 00:22:33.653 00:56:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:33.653 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.653 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:33.653 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.653 00:56:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:33.653 00:56:26 -- nvmf/common.sh@717 -- # local ip 00:22:33.653 00:56:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:33.653 00:56:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:33.653 00:56:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.653 00:56:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.653 00:56:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:33.653 00:56:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.653 00:56:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:33.653 00:56:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:33.654 00:56:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:33.654 00:56:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:33.654 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.654 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:33.654 nvme0n1 00:22:33.654 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.654 00:56:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:33.654 00:56:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.654 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.654 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:33.654 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.654 00:56:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.654 00:56:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.654 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.654 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:33.654 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.654 00:56:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:33.654 00:56:26 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:22:33.654 00:56:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:33.654 00:56:26 -- host/auth.sh@44 -- # digest=sha512 00:22:33.654 00:56:26 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:33.654 00:56:26 -- host/auth.sh@44 -- # keyid=4 00:22:33.654 00:56:26 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:33.654 00:56:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:33.654 00:56:26 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:33.654 00:56:26 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:33.654 00:56:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:22:33.654 00:56:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:33.654 00:56:26 -- host/auth.sh@68 -- # digest=sha512 00:22:33.654 00:56:26 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:33.654 00:56:26 -- host/auth.sh@68 -- # keyid=4 00:22:33.654 00:56:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:33.654 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.654 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:33.912 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.912 00:56:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:33.912 00:56:26 -- nvmf/common.sh@717 -- # local ip 00:22:33.912 00:56:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:33.912 00:56:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:33.912 00:56:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.912 00:56:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.912 00:56:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:33.912 00:56:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.912 00:56:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:33.912 00:56:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:33.912 00:56:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:33.912 00:56:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:33.912 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.912 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:33.912 nvme0n1 00:22:33.912 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.912 00:56:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:33.912 00:56:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:33.912 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.912 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:33.912 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.912 00:56:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.912 00:56:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:33.912 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.912 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:33.912 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.912 00:56:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:33.912 00:56:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:33.912 00:56:26 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe3072 0 00:22:33.912 00:56:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:33.912 00:56:26 -- host/auth.sh@44 -- # digest=sha512 00:22:33.912 00:56:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:33.912 00:56:26 -- host/auth.sh@44 -- # keyid=0 00:22:33.912 00:56:26 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:33.912 00:56:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:33.912 00:56:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:33.912 00:56:26 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:33.912 00:56:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:22:33.912 00:56:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:33.912 00:56:26 -- host/auth.sh@68 -- # digest=sha512 00:22:33.912 00:56:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:33.912 00:56:26 -- host/auth.sh@68 -- # keyid=0 00:22:33.912 00:56:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:33.912 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.912 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:33.912 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:33.912 00:56:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:33.912 00:56:26 -- nvmf/common.sh@717 -- # local ip 00:22:33.912 00:56:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:33.912 00:56:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:33.912 00:56:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:33.912 00:56:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:33.912 00:56:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:33.912 00:56:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:33.912 00:56:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:33.912 00:56:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:33.912 00:56:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:33.912 00:56:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:33.912 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:33.912 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.170 nvme0n1 00:22:34.170 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.170 00:56:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.170 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.170 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.170 00:56:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:34.170 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.170 00:56:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.170 00:56:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.170 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.170 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.170 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.170 00:56:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:34.170 00:56:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:22:34.170 00:56:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:34.170 00:56:26 -- host/auth.sh@44 -- # 
digest=sha512 00:22:34.170 00:56:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.170 00:56:26 -- host/auth.sh@44 -- # keyid=1 00:22:34.170 00:56:26 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:34.170 00:56:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:34.170 00:56:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:34.170 00:56:26 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:34.170 00:56:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:22:34.170 00:56:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:34.170 00:56:26 -- host/auth.sh@68 -- # digest=sha512 00:22:34.170 00:56:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:34.170 00:56:26 -- host/auth.sh@68 -- # keyid=1 00:22:34.170 00:56:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:34.170 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.170 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.170 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.170 00:56:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:34.170 00:56:26 -- nvmf/common.sh@717 -- # local ip 00:22:34.170 00:56:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:34.170 00:56:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:34.170 00:56:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.170 00:56:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.170 00:56:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:34.170 00:56:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.170 00:56:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:34.170 00:56:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:34.170 00:56:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:34.170 00:56:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:34.170 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.170 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.429 nvme0n1 00:22:34.429 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.429 00:56:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:34.429 00:56:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.429 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.429 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.429 00:56:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.429 00:56:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.429 00:56:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.429 00:56:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.429 00:56:26 -- common/autotest_common.sh@10 -- # set +x 00:22:34.429 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.429 00:56:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:34.429 00:56:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:22:34.429 00:56:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:34.429 00:56:27 -- host/auth.sh@44 -- # digest=sha512 00:22:34.429 00:56:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.429 00:56:27 -- host/auth.sh@44 
-- # keyid=2 00:22:34.429 00:56:27 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:34.429 00:56:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:34.429 00:56:27 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:34.429 00:56:27 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:34.429 00:56:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:22:34.429 00:56:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:34.429 00:56:27 -- host/auth.sh@68 -- # digest=sha512 00:22:34.429 00:56:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:34.429 00:56:27 -- host/auth.sh@68 -- # keyid=2 00:22:34.429 00:56:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:34.429 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.429 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.429 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.429 00:56:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:34.429 00:56:27 -- nvmf/common.sh@717 -- # local ip 00:22:34.429 00:56:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:34.429 00:56:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:34.429 00:56:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.429 00:56:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.429 00:56:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:34.429 00:56:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.429 00:56:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:34.429 00:56:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:34.429 00:56:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:34.429 00:56:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:34.429 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.429 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.688 nvme0n1 00:22:34.688 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.688 00:56:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.688 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.688 00:56:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:34.688 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.688 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.688 00:56:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.688 00:56:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.688 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.688 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.688 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.688 00:56:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:34.688 00:56:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:22:34.688 00:56:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:34.688 00:56:27 -- host/auth.sh@44 -- # digest=sha512 00:22:34.688 00:56:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.688 00:56:27 -- host/auth.sh@44 -- # keyid=3 00:22:34.688 00:56:27 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:34.688 00:56:27 
-- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:34.688 00:56:27 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:34.688 00:56:27 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:34.688 00:56:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:22:34.688 00:56:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:34.688 00:56:27 -- host/auth.sh@68 -- # digest=sha512 00:22:34.688 00:56:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:34.688 00:56:27 -- host/auth.sh@68 -- # keyid=3 00:22:34.688 00:56:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:34.688 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.688 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.688 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.688 00:56:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:34.688 00:56:27 -- nvmf/common.sh@717 -- # local ip 00:22:34.688 00:56:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:34.688 00:56:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:34.688 00:56:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.688 00:56:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.688 00:56:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:34.688 00:56:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.688 00:56:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:34.688 00:56:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:34.688 00:56:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:34.688 00:56:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:34.688 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.688 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.946 nvme0n1 00:22:34.946 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.946 00:56:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:34.946 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.946 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.946 00:56:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:34.946 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.946 00:56:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.946 00:56:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:34.946 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.946 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.946 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.946 00:56:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:34.946 00:56:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:22:34.946 00:56:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:34.946 00:56:27 -- host/auth.sh@44 -- # digest=sha512 00:22:34.946 00:56:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:34.946 00:56:27 -- host/auth.sh@44 -- # keyid=4 00:22:34.946 00:56:27 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:34.946 00:56:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:34.946 00:56:27 -- host/auth.sh@48 -- # echo 
ffdhe3072 00:22:34.946 00:56:27 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:34.946 00:56:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:22:34.946 00:56:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:34.946 00:56:27 -- host/auth.sh@68 -- # digest=sha512 00:22:34.946 00:56:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:34.946 00:56:27 -- host/auth.sh@68 -- # keyid=4 00:22:34.946 00:56:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:34.946 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.946 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:34.946 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:34.946 00:56:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:34.946 00:56:27 -- nvmf/common.sh@717 -- # local ip 00:22:34.946 00:56:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:34.946 00:56:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:34.946 00:56:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:34.946 00:56:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:34.946 00:56:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:34.946 00:56:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:34.946 00:56:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:34.946 00:56:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:34.946 00:56:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:34.946 00:56:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:34.946 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:34.946 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:35.204 nvme0n1 00:22:35.204 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.204 00:56:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.204 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.204 00:56:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:35.204 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:35.204 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.204 00:56:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.204 00:56:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.204 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.204 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:35.204 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.204 00:56:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:35.204 00:56:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:35.204 00:56:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:22:35.204 00:56:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:35.204 00:56:27 -- host/auth.sh@44 -- # digest=sha512 00:22:35.204 00:56:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.204 00:56:27 -- host/auth.sh@44 -- # keyid=0 00:22:35.204 00:56:27 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:35.204 00:56:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:35.204 00:56:27 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:35.204 00:56:27 -- 
host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:35.204 00:56:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:22:35.204 00:56:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:35.204 00:56:27 -- host/auth.sh@68 -- # digest=sha512 00:22:35.205 00:56:27 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:35.205 00:56:27 -- host/auth.sh@68 -- # keyid=0 00:22:35.205 00:56:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:35.205 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.205 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:35.205 00:56:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.205 00:56:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:35.205 00:56:27 -- nvmf/common.sh@717 -- # local ip 00:22:35.205 00:56:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:35.205 00:56:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:35.205 00:56:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.205 00:56:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.205 00:56:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:35.205 00:56:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.205 00:56:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:35.205 00:56:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:35.205 00:56:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:35.205 00:56:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:35.205 00:56:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.205 00:56:27 -- common/autotest_common.sh@10 -- # set +x 00:22:35.463 nvme0n1 00:22:35.463 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.463 00:56:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.463 00:56:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:35.463 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.463 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.463 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.463 00:56:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.463 00:56:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.463 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.463 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.463 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.463 00:56:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:35.463 00:56:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:22:35.463 00:56:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:35.463 00:56:28 -- host/auth.sh@44 -- # digest=sha512 00:22:35.463 00:56:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.463 00:56:28 -- host/auth.sh@44 -- # keyid=1 00:22:35.463 00:56:28 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:35.463 00:56:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:35.463 00:56:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:35.463 00:56:28 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:35.463 00:56:28 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:22:35.463 00:56:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:35.463 00:56:28 -- host/auth.sh@68 -- # digest=sha512 00:22:35.463 00:56:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:35.463 00:56:28 -- host/auth.sh@68 -- # keyid=1 00:22:35.463 00:56:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:35.463 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.463 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.463 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.463 00:56:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:35.463 00:56:28 -- nvmf/common.sh@717 -- # local ip 00:22:35.463 00:56:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:35.463 00:56:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:35.463 00:56:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.463 00:56:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.463 00:56:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:35.463 00:56:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.463 00:56:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:35.463 00:56:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:35.463 00:56:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:35.463 00:56:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:35.463 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.463 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.721 nvme0n1 00:22:35.721 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.721 00:56:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:35.721 00:56:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.721 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.721 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.721 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.721 00:56:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.721 00:56:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.721 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.721 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.721 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.721 00:56:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:35.721 00:56:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:22:35.721 00:56:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:35.721 00:56:28 -- host/auth.sh@44 -- # digest=sha512 00:22:35.721 00:56:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.721 00:56:28 -- host/auth.sh@44 -- # keyid=2 00:22:35.721 00:56:28 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:35.721 00:56:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:35.721 00:56:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:35.721 00:56:28 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:35.721 00:56:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:22:35.721 00:56:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:35.721 00:56:28 -- 
host/auth.sh@68 -- # digest=sha512 00:22:35.721 00:56:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:35.721 00:56:28 -- host/auth.sh@68 -- # keyid=2 00:22:35.721 00:56:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:35.721 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.721 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.721 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.721 00:56:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:35.721 00:56:28 -- nvmf/common.sh@717 -- # local ip 00:22:35.721 00:56:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:35.721 00:56:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:35.721 00:56:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:35.721 00:56:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:35.721 00:56:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:35.721 00:56:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:35.721 00:56:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:35.721 00:56:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:35.721 00:56:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:35.721 00:56:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:35.721 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.721 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.980 nvme0n1 00:22:35.980 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.980 00:56:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:35.980 00:56:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:35.980 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.980 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.980 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.980 00:56:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.980 00:56:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.980 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.980 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.980 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.980 00:56:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:35.980 00:56:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:22:35.980 00:56:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:35.980 00:56:28 -- host/auth.sh@44 -- # digest=sha512 00:22:35.980 00:56:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:35.980 00:56:28 -- host/auth.sh@44 -- # keyid=3 00:22:35.980 00:56:28 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:35.980 00:56:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:35.980 00:56:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:35.980 00:56:28 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:35.980 00:56:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:22:35.980 00:56:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:35.980 00:56:28 -- host/auth.sh@68 -- # digest=sha512 00:22:35.980 00:56:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:35.980 00:56:28 
-- host/auth.sh@68 -- # keyid=3 00:22:35.980 00:56:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:35.980 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:35.980 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:35.980 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:35.980 00:56:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:35.980 00:56:28 -- nvmf/common.sh@717 -- # local ip 00:22:35.980 00:56:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:36.239 00:56:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:36.239 00:56:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.239 00:56:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.239 00:56:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:36.239 00:56:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.239 00:56:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:36.239 00:56:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:36.239 00:56:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:36.239 00:56:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:36.239 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:36.239 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:36.239 nvme0n1 00:22:36.239 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:36.239 00:56:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.239 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:36.239 00:56:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:36.239 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:36.239 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:36.498 00:56:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.498 00:56:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.498 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:36.498 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:36.498 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:36.498 00:56:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:36.498 00:56:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:22:36.498 00:56:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:36.498 00:56:28 -- host/auth.sh@44 -- # digest=sha512 00:22:36.498 00:56:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:36.498 00:56:28 -- host/auth.sh@44 -- # keyid=4 00:22:36.498 00:56:28 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:36.498 00:56:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:36.498 00:56:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:36.498 00:56:28 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:36.498 00:56:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:22:36.498 00:56:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:36.498 00:56:28 -- host/auth.sh@68 -- # digest=sha512 00:22:36.498 00:56:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:36.498 00:56:28 -- host/auth.sh@68 -- # keyid=4 00:22:36.498 00:56:28 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:36.498 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:36.498 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:36.498 00:56:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:36.498 00:56:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:36.498 00:56:28 -- nvmf/common.sh@717 -- # local ip 00:22:36.498 00:56:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:36.498 00:56:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:36.498 00:56:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.498 00:56:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.498 00:56:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:36.498 00:56:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.498 00:56:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:36.498 00:56:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:36.498 00:56:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:36.498 00:56:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:36.498 00:56:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:36.498 00:56:28 -- common/autotest_common.sh@10 -- # set +x 00:22:36.757 nvme0n1 00:22:36.757 00:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:36.757 00:56:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:36.757 00:56:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:36.757 00:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:36.757 00:56:29 -- common/autotest_common.sh@10 -- # set +x 00:22:36.757 00:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:36.757 00:56:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.757 00:56:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.757 00:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:36.757 00:56:29 -- common/autotest_common.sh@10 -- # set +x 00:22:36.757 00:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:36.757 00:56:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:36.757 00:56:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:36.757 00:56:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:22:36.757 00:56:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:36.757 00:56:29 -- host/auth.sh@44 -- # digest=sha512 00:22:36.757 00:56:29 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:36.757 00:56:29 -- host/auth.sh@44 -- # keyid=0 00:22:36.757 00:56:29 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:36.757 00:56:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:36.757 00:56:29 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:36.757 00:56:29 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:36.757 00:56:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:22:36.757 00:56:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:36.757 00:56:29 -- host/auth.sh@68 -- # digest=sha512 00:22:36.757 00:56:29 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:36.757 00:56:29 -- host/auth.sh@68 -- # keyid=0 00:22:36.757 00:56:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
00:22:36.757 00:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:36.757 00:56:29 -- common/autotest_common.sh@10 -- # set +x 00:22:36.757 00:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:36.757 00:56:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:36.757 00:56:29 -- nvmf/common.sh@717 -- # local ip 00:22:36.757 00:56:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:36.757 00:56:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:36.757 00:56:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:36.757 00:56:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:36.757 00:56:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:36.757 00:56:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:36.757 00:56:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:36.757 00:56:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:36.757 00:56:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:36.757 00:56:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:36.757 00:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:36.757 00:56:29 -- common/autotest_common.sh@10 -- # set +x 00:22:37.016 nvme0n1 00:22:37.016 00:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.016 00:56:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.016 00:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.016 00:56:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:37.016 00:56:29 -- common/autotest_common.sh@10 -- # set +x 00:22:37.016 00:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.016 00:56:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.016 00:56:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.016 00:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.016 00:56:29 -- common/autotest_common.sh@10 -- # set +x 00:22:37.274 00:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.274 00:56:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:37.274 00:56:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:22:37.274 00:56:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:37.274 00:56:29 -- host/auth.sh@44 -- # digest=sha512 00:22:37.274 00:56:29 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:37.274 00:56:29 -- host/auth.sh@44 -- # keyid=1 00:22:37.274 00:56:29 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:37.274 00:56:29 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:37.274 00:56:29 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:37.274 00:56:29 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:37.274 00:56:29 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:22:37.274 00:56:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:37.274 00:56:29 -- host/auth.sh@68 -- # digest=sha512 00:22:37.274 00:56:29 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:37.274 00:56:29 -- host/auth.sh@68 -- # keyid=1 00:22:37.274 00:56:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:37.274 00:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.274 00:56:29 -- 
common/autotest_common.sh@10 -- # set +x 00:22:37.274 00:56:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.274 00:56:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:37.274 00:56:29 -- nvmf/common.sh@717 -- # local ip 00:22:37.274 00:56:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:37.274 00:56:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:37.274 00:56:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.274 00:56:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.274 00:56:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:37.274 00:56:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.274 00:56:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:37.274 00:56:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:37.274 00:56:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:37.274 00:56:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:37.274 00:56:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.274 00:56:29 -- common/autotest_common.sh@10 -- # set +x 00:22:37.533 nvme0n1 00:22:37.533 00:56:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.533 00:56:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:37.533 00:56:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:37.533 00:56:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.533 00:56:30 -- common/autotest_common.sh@10 -- # set +x 00:22:37.533 00:56:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.533 00:56:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.533 00:56:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:37.533 00:56:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.533 00:56:30 -- common/autotest_common.sh@10 -- # set +x 00:22:37.533 00:56:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.533 00:56:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:37.533 00:56:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:22:37.533 00:56:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:37.533 00:56:30 -- host/auth.sh@44 -- # digest=sha512 00:22:37.533 00:56:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:37.533 00:56:30 -- host/auth.sh@44 -- # keyid=2 00:22:37.533 00:56:30 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:37.533 00:56:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:37.533 00:56:30 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:37.533 00:56:30 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:37.533 00:56:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:22:37.533 00:56:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:37.533 00:56:30 -- host/auth.sh@68 -- # digest=sha512 00:22:37.533 00:56:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:37.533 00:56:30 -- host/auth.sh@68 -- # keyid=2 00:22:37.533 00:56:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:37.533 00:56:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.533 00:56:30 -- common/autotest_common.sh@10 -- # set +x 00:22:37.533 00:56:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:37.533 00:56:30 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:22:37.534 00:56:30 -- nvmf/common.sh@717 -- # local ip 00:22:37.534 00:56:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:37.534 00:56:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:37.534 00:56:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:37.534 00:56:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:37.534 00:56:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:37.534 00:56:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:37.534 00:56:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:37.534 00:56:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:37.534 00:56:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:37.534 00:56:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:37.534 00:56:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:37.534 00:56:30 -- common/autotest_common.sh@10 -- # set +x 00:22:38.100 nvme0n1 00:22:38.100 00:56:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.101 00:56:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.101 00:56:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.101 00:56:30 -- common/autotest_common.sh@10 -- # set +x 00:22:38.101 00:56:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:38.101 00:56:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.101 00:56:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.101 00:56:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.101 00:56:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.101 00:56:30 -- common/autotest_common.sh@10 -- # set +x 00:22:38.101 00:56:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.101 00:56:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:38.101 00:56:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:22:38.101 00:56:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:38.101 00:56:30 -- host/auth.sh@44 -- # digest=sha512 00:22:38.101 00:56:30 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:38.101 00:56:30 -- host/auth.sh@44 -- # keyid=3 00:22:38.101 00:56:30 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:38.101 00:56:30 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:38.101 00:56:30 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:38.101 00:56:30 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:38.101 00:56:30 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:22:38.101 00:56:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:38.101 00:56:30 -- host/auth.sh@68 -- # digest=sha512 00:22:38.101 00:56:30 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:38.101 00:56:30 -- host/auth.sh@68 -- # keyid=3 00:22:38.101 00:56:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:38.101 00:56:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.101 00:56:30 -- common/autotest_common.sh@10 -- # set +x 00:22:38.101 00:56:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.101 00:56:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:38.101 00:56:30 -- nvmf/common.sh@717 -- # local ip 00:22:38.101 00:56:30 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:22:38.101 00:56:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:38.101 00:56:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.101 00:56:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.101 00:56:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:38.101 00:56:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.101 00:56:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:38.101 00:56:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:38.101 00:56:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:38.101 00:56:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:38.101 00:56:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.101 00:56:30 -- common/autotest_common.sh@10 -- # set +x 00:22:38.359 nvme0n1 00:22:38.359 00:56:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.359 00:56:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.359 00:56:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:38.359 00:56:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.359 00:56:31 -- common/autotest_common.sh@10 -- # set +x 00:22:38.359 00:56:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.359 00:56:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.359 00:56:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.359 00:56:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.359 00:56:31 -- common/autotest_common.sh@10 -- # set +x 00:22:38.617 00:56:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.617 00:56:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:38.617 00:56:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:22:38.617 00:56:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:38.617 00:56:31 -- host/auth.sh@44 -- # digest=sha512 00:22:38.617 00:56:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:38.617 00:56:31 -- host/auth.sh@44 -- # keyid=4 00:22:38.617 00:56:31 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:38.617 00:56:31 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:38.617 00:56:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:38.617 00:56:31 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:38.617 00:56:31 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:22:38.617 00:56:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:38.617 00:56:31 -- host/auth.sh@68 -- # digest=sha512 00:22:38.617 00:56:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:38.617 00:56:31 -- host/auth.sh@68 -- # keyid=4 00:22:38.618 00:56:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:38.618 00:56:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.618 00:56:31 -- common/autotest_common.sh@10 -- # set +x 00:22:38.618 00:56:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.618 00:56:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:38.618 00:56:31 -- nvmf/common.sh@717 -- # local ip 00:22:38.618 00:56:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:38.618 00:56:31 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:22:38.618 00:56:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.618 00:56:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.618 00:56:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:38.618 00:56:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.618 00:56:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:38.618 00:56:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:38.618 00:56:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:38.618 00:56:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:38.618 00:56:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.618 00:56:31 -- common/autotest_common.sh@10 -- # set +x 00:22:38.876 nvme0n1 00:22:38.876 00:56:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.876 00:56:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:38.876 00:56:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:38.876 00:56:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.876 00:56:31 -- common/autotest_common.sh@10 -- # set +x 00:22:38.876 00:56:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.876 00:56:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.876 00:56:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:38.876 00:56:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.876 00:56:31 -- common/autotest_common.sh@10 -- # set +x 00:22:38.876 00:56:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.876 00:56:31 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:38.876 00:56:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:38.876 00:56:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:22:38.876 00:56:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:38.876 00:56:31 -- host/auth.sh@44 -- # digest=sha512 00:22:38.876 00:56:31 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:38.876 00:56:31 -- host/auth.sh@44 -- # keyid=0 00:22:38.876 00:56:31 -- host/auth.sh@45 -- # key=DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:38.876 00:56:31 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:38.876 00:56:31 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:38.876 00:56:31 -- host/auth.sh@49 -- # echo DHHC-1:00:ZGVmMmQwNDY1YTVmZjA0MDQxMGVkYWYyYWIzZWY3N2XziXjV: 00:22:38.876 00:56:31 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:22:38.876 00:56:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:38.876 00:56:31 -- host/auth.sh@68 -- # digest=sha512 00:22:38.876 00:56:31 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:38.876 00:56:31 -- host/auth.sh@68 -- # keyid=0 00:22:38.876 00:56:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:38.876 00:56:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.876 00:56:31 -- common/autotest_common.sh@10 -- # set +x 00:22:38.876 00:56:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:38.876 00:56:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:38.876 00:56:31 -- nvmf/common.sh@717 -- # local ip 00:22:38.876 00:56:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:38.876 00:56:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:38.876 00:56:31 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.876 00:56:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.876 00:56:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:38.876 00:56:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:38.876 00:56:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:38.876 00:56:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:38.876 00:56:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:38.876 00:56:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:38.876 00:56:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:38.876 00:56:31 -- common/autotest_common.sh@10 -- # set +x 00:22:39.455 nvme0n1 00:22:39.455 00:56:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.455 00:56:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:39.455 00:56:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.455 00:56:32 -- common/autotest_common.sh@10 -- # set +x 00:22:39.455 00:56:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:39.455 00:56:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.455 00:56:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.737 00:56:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:39.738 00:56:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.738 00:56:32 -- common/autotest_common.sh@10 -- # set +x 00:22:39.738 00:56:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.738 00:56:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:39.738 00:56:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:22:39.738 00:56:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:39.738 00:56:32 -- host/auth.sh@44 -- # digest=sha512 00:22:39.738 00:56:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:39.738 00:56:32 -- host/auth.sh@44 -- # keyid=1 00:22:39.738 00:56:32 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:39.738 00:56:32 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:39.738 00:56:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:39.738 00:56:32 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:39.738 00:56:32 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:22:39.738 00:56:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:39.738 00:56:32 -- host/auth.sh@68 -- # digest=sha512 00:22:39.738 00:56:32 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:39.738 00:56:32 -- host/auth.sh@68 -- # keyid=1 00:22:39.738 00:56:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:39.738 00:56:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.738 00:56:32 -- common/autotest_common.sh@10 -- # set +x 00:22:39.738 00:56:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.738 00:56:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:39.738 00:56:32 -- nvmf/common.sh@717 -- # local ip 00:22:39.738 00:56:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:39.738 00:56:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:39.738 00:56:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.738 00:56:32 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.738 00:56:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:39.738 00:56:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:39.738 00:56:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:39.738 00:56:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:39.738 00:56:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:39.738 00:56:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:39.738 00:56:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.738 00:56:32 -- common/autotest_common.sh@10 -- # set +x 00:22:40.317 nvme0n1 00:22:40.317 00:56:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.317 00:56:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:40.317 00:56:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.317 00:56:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.317 00:56:32 -- common/autotest_common.sh@10 -- # set +x 00:22:40.317 00:56:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.317 00:56:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.317 00:56:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.317 00:56:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.317 00:56:32 -- common/autotest_common.sh@10 -- # set +x 00:22:40.317 00:56:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.317 00:56:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:40.317 00:56:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:22:40.317 00:56:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:40.317 00:56:32 -- host/auth.sh@44 -- # digest=sha512 00:22:40.317 00:56:32 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:40.317 00:56:32 -- host/auth.sh@44 -- # keyid=2 00:22:40.317 00:56:32 -- host/auth.sh@45 -- # key=DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:40.317 00:56:32 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:40.317 00:56:32 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:40.317 00:56:32 -- host/auth.sh@49 -- # echo DHHC-1:01:MTQxYjVmNjcwYjMyMTYxMTIzODg5NDc1MzJlMzU3NGX26K1H: 00:22:40.317 00:56:32 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:22:40.317 00:56:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:40.317 00:56:32 -- host/auth.sh@68 -- # digest=sha512 00:22:40.317 00:56:32 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:40.317 00:56:32 -- host/auth.sh@68 -- # keyid=2 00:22:40.317 00:56:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:40.317 00:56:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.317 00:56:32 -- common/autotest_common.sh@10 -- # set +x 00:22:40.317 00:56:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.317 00:56:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:40.317 00:56:32 -- nvmf/common.sh@717 -- # local ip 00:22:40.317 00:56:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:40.317 00:56:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:40.317 00:56:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.317 00:56:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.317 00:56:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:40.317 00:56:32 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:22:40.317 00:56:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:40.318 00:56:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:40.318 00:56:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:40.318 00:56:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:40.318 00:56:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.318 00:56:32 -- common/autotest_common.sh@10 -- # set +x 00:22:40.883 nvme0n1 00:22:40.883 00:56:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.883 00:56:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:40.883 00:56:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.883 00:56:33 -- common/autotest_common.sh@10 -- # set +x 00:22:40.883 00:56:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:40.883 00:56:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.883 00:56:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.883 00:56:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:40.883 00:56:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.883 00:56:33 -- common/autotest_common.sh@10 -- # set +x 00:22:40.883 00:56:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.883 00:56:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:40.883 00:56:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:22:40.883 00:56:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:40.883 00:56:33 -- host/auth.sh@44 -- # digest=sha512 00:22:40.883 00:56:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:40.883 00:56:33 -- host/auth.sh@44 -- # keyid=3 00:22:40.883 00:56:33 -- host/auth.sh@45 -- # key=DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:40.883 00:56:33 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:40.883 00:56:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:40.883 00:56:33 -- host/auth.sh@49 -- # echo DHHC-1:02:NDE3NmJiNjAxYjk0MWRmOWJkZGUxNjExYzc3ZGE4YTJkNzBiNTdjYzIzMmM5ZDcxsnF4fA==: 00:22:40.883 00:56:33 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:22:40.883 00:56:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:40.883 00:56:33 -- host/auth.sh@68 -- # digest=sha512 00:22:40.883 00:56:33 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:40.883 00:56:33 -- host/auth.sh@68 -- # keyid=3 00:22:40.884 00:56:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:40.884 00:56:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.884 00:56:33 -- common/autotest_common.sh@10 -- # set +x 00:22:40.884 00:56:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:40.884 00:56:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:40.884 00:56:33 -- nvmf/common.sh@717 -- # local ip 00:22:40.884 00:56:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:40.884 00:56:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:40.884 00:56:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:40.884 00:56:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:40.884 00:56:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:40.884 00:56:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:40.884 00:56:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:40.884 00:56:33 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:40.884 00:56:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:40.884 00:56:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:40.884 00:56:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:40.884 00:56:33 -- common/autotest_common.sh@10 -- # set +x 00:22:41.450 nvme0n1 00:22:41.450 00:56:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.450 00:56:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:41.450 00:56:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:41.450 00:56:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.450 00:56:34 -- common/autotest_common.sh@10 -- # set +x 00:22:41.450 00:56:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.450 00:56:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.450 00:56:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:41.451 00:56:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.451 00:56:34 -- common/autotest_common.sh@10 -- # set +x 00:22:41.451 00:56:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.451 00:56:34 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:41.451 00:56:34 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:22:41.451 00:56:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:41.451 00:56:34 -- host/auth.sh@44 -- # digest=sha512 00:22:41.451 00:56:34 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:41.451 00:56:34 -- host/auth.sh@44 -- # keyid=4 00:22:41.451 00:56:34 -- host/auth.sh@45 -- # key=DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:41.451 00:56:34 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:22:41.451 00:56:34 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:41.451 00:56:34 -- host/auth.sh@49 -- # echo DHHC-1:03:NzRlNDA3OTZmNDMyMjZiMDhhNDljN2IyNGYzY2EwOWU1ZmIyNTE3Zjk4YWI0OGE0YTQwMjVlZjJiZDMwMmU4NEbYH/U=: 00:22:41.451 00:56:34 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:22:41.451 00:56:34 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:41.451 00:56:34 -- host/auth.sh@68 -- # digest=sha512 00:22:41.451 00:56:34 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:41.451 00:56:34 -- host/auth.sh@68 -- # keyid=4 00:22:41.451 00:56:34 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:41.451 00:56:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.451 00:56:34 -- common/autotest_common.sh@10 -- # set +x 00:22:41.451 00:56:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:41.451 00:56:34 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:41.451 00:56:34 -- nvmf/common.sh@717 -- # local ip 00:22:41.451 00:56:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:41.451 00:56:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:41.451 00:56:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:41.451 00:56:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:41.451 00:56:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:41.451 00:56:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:41.451 00:56:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:41.451 00:56:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:41.451 00:56:34 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:41.451 00:56:34 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:41.451 00:56:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:41.451 00:56:34 -- common/autotest_common.sh@10 -- # set +x 00:22:42.017 nvme0n1 00:22:42.018 00:56:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.018 00:56:34 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.018 00:56:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.018 00:56:34 -- common/autotest_common.sh@10 -- # set +x 00:22:42.018 00:56:34 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:42.018 00:56:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.018 00:56:34 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.018 00:56:34 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:42.018 00:56:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.018 00:56:34 -- common/autotest_common.sh@10 -- # set +x 00:22:42.018 00:56:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.018 00:56:34 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:42.018 00:56:34 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:42.018 00:56:34 -- host/auth.sh@44 -- # digest=sha256 00:22:42.018 00:56:34 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:42.018 00:56:34 -- host/auth.sh@44 -- # keyid=1 00:22:42.018 00:56:34 -- host/auth.sh@45 -- # key=DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:42.018 00:56:34 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:42.018 00:56:34 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:42.018 00:56:34 -- host/auth.sh@49 -- # echo DHHC-1:00:NGIwY2M4MWJjNzkyYjJmMDU4NGE5ZWViZTdjMjZmOWY3ZWIzZGYwZmUxMjI1Y2RiXchVrQ==: 00:22:42.018 00:56:34 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:42.018 00:56:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.018 00:56:34 -- common/autotest_common.sh@10 -- # set +x 00:22:42.018 00:56:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.018 00:56:34 -- host/auth.sh@119 -- # get_main_ns_ip 00:22:42.018 00:56:34 -- nvmf/common.sh@717 -- # local ip 00:22:42.018 00:56:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:42.018 00:56:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:42.018 00:56:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.018 00:56:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.018 00:56:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:42.018 00:56:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.018 00:56:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:42.018 00:56:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:42.018 00:56:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:42.018 00:56:34 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:42.018 00:56:34 -- common/autotest_common.sh@638 -- # local es=0 00:22:42.018 00:56:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:42.018 
00:56:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:42.018 00:56:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:42.018 00:56:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:42.018 00:56:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:42.018 00:56:34 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:22:42.018 00:56:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.018 00:56:34 -- common/autotest_common.sh@10 -- # set +x 00:22:42.276 request: 00:22:42.276 { 00:22:42.276 "name": "nvme0", 00:22:42.276 "trtype": "tcp", 00:22:42.276 "traddr": "10.0.0.1", 00:22:42.276 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:42.276 "adrfam": "ipv4", 00:22:42.276 "trsvcid": "4420", 00:22:42.276 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:42.276 "method": "bdev_nvme_attach_controller", 00:22:42.276 "req_id": 1 00:22:42.276 } 00:22:42.276 Got JSON-RPC error response 00:22:42.276 response: 00:22:42.276 { 00:22:42.276 "code": -32602, 00:22:42.276 "message": "Invalid parameters" 00:22:42.276 } 00:22:42.276 00:56:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:42.276 00:56:34 -- common/autotest_common.sh@641 -- # es=1 00:22:42.276 00:56:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:42.276 00:56:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:42.276 00:56:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:42.276 00:56:34 -- host/auth.sh@121 -- # jq length 00:22:42.276 00:56:34 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.276 00:56:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.276 00:56:34 -- common/autotest_common.sh@10 -- # set +x 00:22:42.276 00:56:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.276 00:56:34 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:22:42.276 00:56:34 -- host/auth.sh@124 -- # get_main_ns_ip 00:22:42.276 00:56:34 -- nvmf/common.sh@717 -- # local ip 00:22:42.276 00:56:34 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:42.276 00:56:34 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:42.276 00:56:34 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:42.276 00:56:34 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:42.276 00:56:34 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:22:42.276 00:56:34 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:42.276 00:56:34 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:22:42.276 00:56:34 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:22:42.276 00:56:34 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:22:42.276 00:56:34 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:42.276 00:56:34 -- common/autotest_common.sh@638 -- # local es=0 00:22:42.276 00:56:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:42.276 00:56:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:42.276 00:56:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:42.276 00:56:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:42.276 00:56:34 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:42.276 00:56:34 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:42.276 00:56:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.276 00:56:34 -- common/autotest_common.sh@10 -- # set +x 00:22:42.276 request: 00:22:42.276 { 00:22:42.276 "name": "nvme0", 00:22:42.276 "trtype": "tcp", 00:22:42.276 "traddr": "10.0.0.1", 00:22:42.276 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:22:42.276 "adrfam": "ipv4", 00:22:42.276 "trsvcid": "4420", 00:22:42.276 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:22:42.276 "dhchap_key": "key2", 00:22:42.276 "method": "bdev_nvme_attach_controller", 00:22:42.276 "req_id": 1 00:22:42.276 } 00:22:42.276 Got JSON-RPC error response 00:22:42.276 response: 00:22:42.276 { 00:22:42.276 "code": -32602, 00:22:42.276 "message": "Invalid parameters" 00:22:42.276 } 00:22:42.276 00:56:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:42.276 00:56:34 -- common/autotest_common.sh@641 -- # es=1 00:22:42.276 00:56:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:42.276 00:56:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:42.276 00:56:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:42.276 00:56:34 -- host/auth.sh@127 -- # jq length 00:22:42.276 00:56:34 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:22:42.276 00:56:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:42.276 00:56:34 -- common/autotest_common.sh@10 -- # set +x 00:22:42.276 00:56:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:42.276 00:56:34 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:22:42.276 00:56:34 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:22:42.276 00:56:34 -- host/auth.sh@130 -- # cleanup 00:22:42.276 00:56:34 -- host/auth.sh@24 -- # nvmftestfini 00:22:42.276 00:56:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:42.276 00:56:34 -- nvmf/common.sh@117 -- # sync 00:22:42.276 00:56:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:42.276 00:56:34 -- nvmf/common.sh@120 -- # set +e 00:22:42.276 00:56:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:42.276 00:56:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:42.276 rmmod nvme_tcp 00:22:42.276 rmmod nvme_fabrics 00:22:42.276 00:56:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:42.276 00:56:34 -- nvmf/common.sh@124 -- # set -e 00:22:42.276 00:56:34 -- nvmf/common.sh@125 -- # return 0 00:22:42.276 00:56:34 -- nvmf/common.sh@478 -- # '[' -n 1786545 ']' 00:22:42.276 00:56:34 -- nvmf/common.sh@479 -- # killprocess 1786545 00:22:42.276 00:56:34 -- common/autotest_common.sh@936 -- # '[' -z 1786545 ']' 00:22:42.276 00:56:34 -- common/autotest_common.sh@940 -- # kill -0 1786545 00:22:42.276 00:56:34 -- common/autotest_common.sh@941 -- # uname 00:22:42.276 00:56:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:42.276 00:56:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1786545 00:22:42.535 00:56:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:42.535 00:56:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:42.535 00:56:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1786545' 00:22:42.535 killing process with pid 1786545 00:22:42.535 00:56:34 -- common/autotest_common.sh@955 -- # kill 1786545 00:22:42.535 00:56:34 -- 
common/autotest_common.sh@960 -- # wait 1786545 00:22:42.535 00:56:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:42.535 00:56:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:42.535 00:56:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:42.535 00:56:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.535 00:56:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.535 00:56:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.535 00:56:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.535 00:56:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.072 00:56:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:45.072 00:56:37 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:45.072 00:56:37 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:45.072 00:56:37 -- host/auth.sh@27 -- # clean_kernel_target 00:22:45.072 00:56:37 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:22:45.072 00:56:37 -- nvmf/common.sh@675 -- # echo 0 00:22:45.072 00:56:37 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:45.072 00:56:37 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:45.072 00:56:37 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:45.072 00:56:37 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:45.072 00:56:37 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:22:45.072 00:56:37 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:22:45.072 00:56:37 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:46.977 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:46.977 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:46.977 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:46.977 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:46.977 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:46.977 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:46.977 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:46.977 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:46.977 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:46.977 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:46.977 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:46.977 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:47.236 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:47.236 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:47.236 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:47.236 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:48.172 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:22:48.172 00:56:40 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rXq /tmp/spdk.key-null.X0F /tmp/spdk.key-sha256.EYc /tmp/spdk.key-sha384.NDZ /tmp/spdk.key-sha512.a60 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:22:48.172 00:56:40 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:50.704 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:22:50.704 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:00:04.6 (8086 2021): Already using the 
vfio-pci driver 00:22:50.704 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:22:50.704 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:22:50.704 00:22:50.704 real 0m48.291s 00:22:50.704 user 0m42.514s 00:22:50.704 sys 0m11.450s 00:22:50.704 00:56:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:50.704 00:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:50.704 ************************************ 00:22:50.704 END TEST nvmf_auth 00:22:50.704 ************************************ 00:22:50.704 00:56:43 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:22:50.704 00:56:43 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:50.704 00:56:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:50.704 00:56:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:50.704 00:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:50.964 ************************************ 00:22:50.964 START TEST nvmf_digest 00:22:50.964 ************************************ 00:22:50.964 00:56:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:50.964 * Looking for test storage... 
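[Editor's note] The nvmf_auth run that finishes above repeats one cycle per digest/dhgroup/key slot: the key is written into the kernel nvmet configfs host entry (nvmet_auth_set_key), then SPDK's bdev_nvme RPCs attach, verify and detach the controller. The following is a minimal sketch of a single iteration using only the RPC names, flags, addresses and key slot visible in the trace; the rpc.py path is shortened for readability and the exact configfs attribute names are not shown here, so treat this as an illustration rather than the canonical test flow.

    rpc=./scripts/rpc.py   # shortened; the trace uses the full workspace path

    # Restrict the initiator to one digest/dhgroup combination, e.g. sha512/ffdhe8192.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Attach to the kernel nvmet target on 10.0.0.1:4420 using key slot 0.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0

    # Success is checked by listing controllers and expecting "nvme0", then detaching.
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'
    $rpc bdev_nvme_detach_controller nvme0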
00:22:50.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:50.964 00:56:43 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.964 00:56:43 -- nvmf/common.sh@7 -- # uname -s 00:22:50.964 00:56:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.964 00:56:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.964 00:56:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.964 00:56:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.964 00:56:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.964 00:56:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.964 00:56:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.964 00:56:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.964 00:56:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.964 00:56:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.964 00:56:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:50.964 00:56:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:50.964 00:56:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.964 00:56:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.964 00:56:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.964 00:56:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.964 00:56:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.964 00:56:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.964 00:56:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.964 00:56:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.964 00:56:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.964 00:56:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.964 00:56:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.964 00:56:43 -- paths/export.sh@5 -- # export PATH 00:22:50.964 00:56:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.964 00:56:43 -- nvmf/common.sh@47 -- # : 0 00:22:50.964 00:56:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.964 00:56:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.964 00:56:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.964 00:56:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.964 00:56:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.964 00:56:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.964 00:56:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.964 00:56:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.964 00:56:43 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:50.964 00:56:43 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:50.964 00:56:43 -- host/digest.sh@16 -- # runtime=2 00:22:50.964 00:56:43 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:22:50.964 00:56:43 -- host/digest.sh@138 -- # nvmftestinit 00:22:50.964 00:56:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:50.964 00:56:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.964 00:56:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:50.964 00:56:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:50.964 00:56:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:50.964 00:56:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.964 00:56:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:50.964 00:56:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.964 00:56:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:50.964 00:56:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:50.964 00:56:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:50.964 00:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:56.234 00:56:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:56.234 00:56:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:56.234 00:56:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:56.234 00:56:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:56.234 00:56:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:56.234 00:56:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:56.234 00:56:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:56.234 00:56:48 -- 
nvmf/common.sh@295 -- # net_devs=() 00:22:56.234 00:56:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:56.234 00:56:48 -- nvmf/common.sh@296 -- # e810=() 00:22:56.234 00:56:48 -- nvmf/common.sh@296 -- # local -ga e810 00:22:56.234 00:56:48 -- nvmf/common.sh@297 -- # x722=() 00:22:56.234 00:56:48 -- nvmf/common.sh@297 -- # local -ga x722 00:22:56.234 00:56:48 -- nvmf/common.sh@298 -- # mlx=() 00:22:56.234 00:56:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:56.234 00:56:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.234 00:56:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.234 00:56:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.234 00:56:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.234 00:56:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.234 00:56:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.234 00:56:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.234 00:56:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.234 00:56:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.234 00:56:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.234 00:56:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.234 00:56:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:56.234 00:56:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:56.234 00:56:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:56.234 00:56:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:56.234 00:56:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:56.234 00:56:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:56.234 00:56:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.235 00:56:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:56.235 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:56.235 00:56:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.235 00:56:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:56.235 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:56.235 00:56:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:56.235 00:56:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.235 00:56:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.235 00:56:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:56.235 00:56:48 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.235 00:56:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:56.235 Found net devices under 0000:86:00.0: cvl_0_0 00:22:56.235 00:56:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.235 00:56:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.235 00:56:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.235 00:56:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:56.235 00:56:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.235 00:56:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:56.235 Found net devices under 0000:86:00.1: cvl_0_1 00:22:56.235 00:56:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.235 00:56:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:56.235 00:56:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:56.235 00:56:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:56.235 00:56:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.235 00:56:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.235 00:56:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.235 00:56:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:56.235 00:56:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.235 00:56:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.235 00:56:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:56.235 00:56:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.235 00:56:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.235 00:56:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:56.235 00:56:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:56.235 00:56:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.235 00:56:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.235 00:56:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.235 00:56:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.235 00:56:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:56.235 00:56:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.235 00:56:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.235 00:56:48 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.235 00:56:48 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:56.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:22:56.235 00:22:56.235 --- 10.0.0.2 ping statistics --- 00:22:56.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.235 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:22:56.235 00:56:48 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:56.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:22:56.235 00:22:56.235 --- 10.0.0.1 ping statistics --- 00:22:56.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.235 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:22:56.235 00:56:48 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.235 00:56:48 -- nvmf/common.sh@411 -- # return 0 00:22:56.235 00:56:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:56.235 00:56:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.235 00:56:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:56.235 00:56:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.235 00:56:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:56.235 00:56:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:56.235 00:56:48 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:56.235 00:56:48 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:22:56.235 00:56:48 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:22:56.235 00:56:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:56.235 00:56:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:56.235 00:56:48 -- common/autotest_common.sh@10 -- # set +x 00:22:56.494 ************************************ 00:22:56.494 START TEST nvmf_digest_clean 00:22:56.494 ************************************ 00:22:56.494 00:56:49 -- common/autotest_common.sh@1111 -- # run_digest 00:22:56.494 00:56:49 -- host/digest.sh@120 -- # local dsa_initiator 00:22:56.494 00:56:49 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:22:56.494 00:56:49 -- host/digest.sh@121 -- # dsa_initiator=false 00:22:56.494 00:56:49 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:22:56.494 00:56:49 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:22:56.494 00:56:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:56.494 00:56:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:56.494 00:56:49 -- common/autotest_common.sh@10 -- # set +x 00:22:56.494 00:56:49 -- nvmf/common.sh@470 -- # nvmfpid=1799362 00:22:56.494 00:56:49 -- nvmf/common.sh@471 -- # waitforlisten 1799362 00:22:56.494 00:56:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:56.494 00:56:49 -- common/autotest_common.sh@817 -- # '[' -z 1799362 ']' 00:22:56.494 00:56:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.494 00:56:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:56.494 00:56:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.494 00:56:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:56.494 00:56:49 -- common/autotest_common.sh@10 -- # set +x 00:22:56.494 [2024-04-27 00:56:49.065035] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
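[Editor's note] Before the digest tests start, nvmf_tcp_init has wired the two E810 ports into a small loopback topology: the target port is moved into its own network namespace and the ping exchange above confirms reachability in both directions. A condensed sketch of that plumbing, using only commands that appear in the trace (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are specific to this machine):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                         # target side gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator keeps cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target, as verified above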
00:22:56.494 [2024-04-27 00:56:49.065082] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.494 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.494 [2024-04-27 00:56:49.121895] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.751 [2024-04-27 00:56:49.200324] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.751 [2024-04-27 00:56:49.200357] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.751 [2024-04-27 00:56:49.200365] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.751 [2024-04-27 00:56:49.200371] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.752 [2024-04-27 00:56:49.200376] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:56.752 [2024-04-27 00:56:49.200396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.319 00:56:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:57.319 00:56:49 -- common/autotest_common.sh@850 -- # return 0 00:22:57.319 00:56:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:57.319 00:56:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:57.319 00:56:49 -- common/autotest_common.sh@10 -- # set +x 00:22:57.319 00:56:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.319 00:56:49 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:22:57.319 00:56:49 -- host/digest.sh@126 -- # common_target_config 00:22:57.319 00:56:49 -- host/digest.sh@43 -- # rpc_cmd 00:22:57.319 00:56:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.319 00:56:49 -- common/autotest_common.sh@10 -- # set +x 00:22:57.319 null0 00:22:57.319 [2024-04-27 00:56:49.987534] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.319 [2024-04-27 00:56:50.011727] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.578 00:56:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.578 00:56:50 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:22:57.578 00:56:50 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:57.578 00:56:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:57.578 00:56:50 -- host/digest.sh@80 -- # rw=randread 00:22:57.578 00:56:50 -- host/digest.sh@80 -- # bs=4096 00:22:57.578 00:56:50 -- host/digest.sh@80 -- # qd=128 00:22:57.578 00:56:50 -- host/digest.sh@80 -- # scan_dsa=false 00:22:57.578 00:56:50 -- host/digest.sh@83 -- # bperfpid=1799603 00:22:57.578 00:56:50 -- host/digest.sh@84 -- # waitforlisten 1799603 /var/tmp/bperf.sock 00:22:57.578 00:56:50 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:57.578 00:56:50 -- common/autotest_common.sh@817 -- # '[' -z 1799603 ']' 00:22:57.578 00:56:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:57.578 00:56:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:57.578 00:56:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:57.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:57.578 00:56:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:57.578 00:56:50 -- common/autotest_common.sh@10 -- # set +x 00:22:57.578 [2024-04-27 00:56:50.060549] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:22:57.578 [2024-04-27 00:56:50.060597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1799603 ] 00:22:57.578 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.578 [2024-04-27 00:56:50.115411] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.578 [2024-04-27 00:56:50.191889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.513 00:56:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:58.513 00:56:50 -- common/autotest_common.sh@850 -- # return 0 00:22:58.513 00:56:50 -- host/digest.sh@86 -- # false 00:22:58.513 00:56:50 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:58.513 00:56:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:58.513 00:56:51 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:58.513 00:56:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:58.771 nvme0n1 00:22:58.771 00:56:51 -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:58.771 00:56:51 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:58.771 Running I/O for 2 seconds... 
00:23:01.303 00:23:01.303 Latency(us) 00:23:01.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.303 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:01.303 nvme0n1 : 2.00 25961.68 101.41 0.00 0.00 4925.27 2236.77 23820.91 00:23:01.303 =================================================================================================================== 00:23:01.303 Total : 25961.68 101.41 0.00 0.00 4925.27 2236.77 23820.91 00:23:01.303 0 00:23:01.303 00:56:53 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:01.303 00:56:53 -- host/digest.sh@93 -- # get_accel_stats 00:23:01.303 00:56:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:01.303 00:56:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:01.303 | select(.opcode=="crc32c") 00:23:01.303 | "\(.module_name) \(.executed)"' 00:23:01.303 00:56:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:01.303 00:56:53 -- host/digest.sh@94 -- # false 00:23:01.303 00:56:53 -- host/digest.sh@94 -- # exp_module=software 00:23:01.303 00:56:53 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:01.303 00:56:53 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:01.304 00:56:53 -- host/digest.sh@98 -- # killprocess 1799603 00:23:01.304 00:56:53 -- common/autotest_common.sh@936 -- # '[' -z 1799603 ']' 00:23:01.304 00:56:53 -- common/autotest_common.sh@940 -- # kill -0 1799603 00:23:01.304 00:56:53 -- common/autotest_common.sh@941 -- # uname 00:23:01.304 00:56:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:01.304 00:56:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1799603 00:23:01.304 00:56:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:01.304 00:56:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:01.304 00:56:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1799603' 00:23:01.304 killing process with pid 1799603 00:23:01.304 00:56:53 -- common/autotest_common.sh@955 -- # kill 1799603 00:23:01.304 Received shutdown signal, test time was about 2.000000 seconds 00:23:01.304 00:23:01.304 Latency(us) 00:23:01.304 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.304 =================================================================================================================== 00:23:01.304 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.304 00:56:53 -- common/autotest_common.sh@960 -- # wait 1799603 00:23:01.304 00:56:53 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:01.304 00:56:53 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:01.304 00:56:53 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:01.304 00:56:53 -- host/digest.sh@80 -- # rw=randread 00:23:01.304 00:56:53 -- host/digest.sh@80 -- # bs=131072 00:23:01.304 00:56:53 -- host/digest.sh@80 -- # qd=16 00:23:01.304 00:56:53 -- host/digest.sh@80 -- # scan_dsa=false 00:23:01.304 00:56:53 -- host/digest.sh@83 -- # bperfpid=1800295 00:23:01.304 00:56:53 -- host/digest.sh@84 -- # waitforlisten 1800295 /var/tmp/bperf.sock 00:23:01.304 00:56:53 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:01.304 00:56:53 -- common/autotest_common.sh@817 -- # '[' -z 1800295 ']' 00:23:01.304 00:56:53 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:01.304 00:56:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:01.304 00:56:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:01.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:01.304 00:56:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:01.304 00:56:53 -- common/autotest_common.sh@10 -- # set +x 00:23:01.304 [2024-04-27 00:56:53.930197] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:23:01.304 [2024-04-27 00:56:53.930244] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1800295 ] 00:23:01.304 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:01.304 Zero copy mechanism will not be used. 00:23:01.304 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.304 [2024-04-27 00:56:53.984155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.563 [2024-04-27 00:56:54.051824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.129 00:56:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:02.129 00:56:54 -- common/autotest_common.sh@850 -- # return 0 00:23:02.129 00:56:54 -- host/digest.sh@86 -- # false 00:23:02.129 00:56:54 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:02.129 00:56:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:02.388 00:56:54 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:02.388 00:56:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:02.645 nvme0n1 00:23:02.904 00:56:55 -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:02.904 00:56:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:02.904 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:02.904 Zero copy mechanism will not be used. 00:23:02.904 Running I/O for 2 seconds... 
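[Editor's note] After each workload, the test verifies that the digest (CRC32C) work was actually performed and by which accel module. The check used above pulls accel statistics from bdevperf and filters for crc32c operations; with DSA disabled (scan_dsa=false) the expected module is "software" and the executed count must be non-zero. Sketch, using the same jq filter as the trace (rpc.py path shortened):

    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # Expected output shape:  software <non-zero count>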
00:23:04.804 00:23:04.804 Latency(us) 00:23:04.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.804 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:04.804 nvme0n1 : 2.00 2406.48 300.81 0.00 0.00 6645.65 5841.25 17438.27 00:23:04.804 =================================================================================================================== 00:23:04.804 Total : 2406.48 300.81 0.00 0.00 6645.65 5841.25 17438.27 00:23:04.804 0 00:23:04.804 00:56:57 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:04.804 00:56:57 -- host/digest.sh@93 -- # get_accel_stats 00:23:04.804 00:56:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:04.804 00:56:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:04.804 | select(.opcode=="crc32c") 00:23:04.804 | "\(.module_name) \(.executed)"' 00:23:04.804 00:56:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:05.062 00:56:57 -- host/digest.sh@94 -- # false 00:23:05.062 00:56:57 -- host/digest.sh@94 -- # exp_module=software 00:23:05.062 00:56:57 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:05.062 00:56:57 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:05.062 00:56:57 -- host/digest.sh@98 -- # killprocess 1800295 00:23:05.062 00:56:57 -- common/autotest_common.sh@936 -- # '[' -z 1800295 ']' 00:23:05.062 00:56:57 -- common/autotest_common.sh@940 -- # kill -0 1800295 00:23:05.062 00:56:57 -- common/autotest_common.sh@941 -- # uname 00:23:05.063 00:56:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:05.063 00:56:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1800295 00:23:05.063 00:56:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:05.063 00:56:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:05.063 00:56:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1800295' 00:23:05.063 killing process with pid 1800295 00:23:05.063 00:56:57 -- common/autotest_common.sh@955 -- # kill 1800295 00:23:05.063 Received shutdown signal, test time was about 2.000000 seconds 00:23:05.063 00:23:05.063 Latency(us) 00:23:05.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.063 =================================================================================================================== 00:23:05.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.063 00:56:57 -- common/autotest_common.sh@960 -- # wait 1800295 00:23:05.322 00:56:57 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:05.322 00:56:57 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:05.322 00:56:57 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:05.322 00:56:57 -- host/digest.sh@80 -- # rw=randwrite 00:23:05.322 00:56:57 -- host/digest.sh@80 -- # bs=4096 00:23:05.322 00:56:57 -- host/digest.sh@80 -- # qd=128 00:23:05.322 00:56:57 -- host/digest.sh@80 -- # scan_dsa=false 00:23:05.322 00:56:57 -- host/digest.sh@83 -- # bperfpid=1800992 00:23:05.322 00:56:57 -- host/digest.sh@84 -- # waitforlisten 1800992 /var/tmp/bperf.sock 00:23:05.322 00:56:57 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:05.322 00:56:57 -- common/autotest_common.sh@817 -- # '[' -z 1800992 ']' 00:23:05.322 00:56:57 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:05.322 00:56:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:05.322 00:56:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:05.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:05.322 00:56:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:05.322 00:56:57 -- common/autotest_common.sh@10 -- # set +x 00:23:05.322 [2024-04-27 00:56:57.931178] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:23:05.322 [2024-04-27 00:56:57.931226] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1800992 ] 00:23:05.322 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.322 [2024-04-27 00:56:57.985005] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.580 [2024-04-27 00:56:58.055459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.147 00:56:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:06.147 00:56:58 -- common/autotest_common.sh@850 -- # return 0 00:23:06.147 00:56:58 -- host/digest.sh@86 -- # false 00:23:06.147 00:56:58 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:06.147 00:56:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:06.406 00:56:58 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:06.406 00:56:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:06.664 nvme0n1 00:23:06.664 00:56:59 -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:06.664 00:56:59 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:06.923 Running I/O for 2 seconds... 
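After every run the script checks that the crc32c work went through the module it expected: accel_get_stats is pulled over the bperf socket, jq filters the operations array down to the crc32c opcode, and the resulting module name and executed count are compared against the expectation (software here, since these runs set scan_dsa=false). A sketch of that check, reconstructed from the trace, with bperf_rpc as sketched earlier:

get_accel_stats() {
    bperf_rpc accel_get_stats | jq -rc '.operations[]
        | select(.opcode=="crc32c")
        | "\(.module_name) \(.executed)"'
}

read -r acc_module acc_executed < <(get_accel_stats)
exp_module=software                 # scan_dsa=false, so crc32c should stay in software
(( acc_executed > 0 ))              # some crc32c work must actually have been executed
[[ $acc_module == "$exp_module" ]]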
00:23:08.823 00:23:08.823 Latency(us) 00:23:08.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.823 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:08.823 nvme0n1 : 2.00 25523.22 99.70 0.00 0.00 5005.99 3134.33 22225.25 00:23:08.823 =================================================================================================================== 00:23:08.823 Total : 25523.22 99.70 0.00 0.00 5005.99 3134.33 22225.25 00:23:08.823 0 00:23:08.823 00:57:01 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:08.823 00:57:01 -- host/digest.sh@93 -- # get_accel_stats 00:23:08.823 00:57:01 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:08.823 00:57:01 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:08.823 | select(.opcode=="crc32c") 00:23:08.823 | "\(.module_name) \(.executed)"' 00:23:08.823 00:57:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:09.082 00:57:01 -- host/digest.sh@94 -- # false 00:23:09.082 00:57:01 -- host/digest.sh@94 -- # exp_module=software 00:23:09.082 00:57:01 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:09.082 00:57:01 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:09.082 00:57:01 -- host/digest.sh@98 -- # killprocess 1800992 00:23:09.082 00:57:01 -- common/autotest_common.sh@936 -- # '[' -z 1800992 ']' 00:23:09.082 00:57:01 -- common/autotest_common.sh@940 -- # kill -0 1800992 00:23:09.082 00:57:01 -- common/autotest_common.sh@941 -- # uname 00:23:09.082 00:57:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:09.082 00:57:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1800992 00:23:09.082 00:57:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:09.082 00:57:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:09.082 00:57:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1800992' 00:23:09.082 killing process with pid 1800992 00:23:09.082 00:57:01 -- common/autotest_common.sh@955 -- # kill 1800992 00:23:09.082 Received shutdown signal, test time was about 2.000000 seconds 00:23:09.082 00:23:09.082 Latency(us) 00:23:09.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.082 =================================================================================================================== 00:23:09.082 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:09.082 00:57:01 -- common/autotest_common.sh@960 -- # wait 1800992 00:23:09.341 00:57:01 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:09.341 00:57:01 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:09.341 00:57:01 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:09.341 00:57:01 -- host/digest.sh@80 -- # rw=randwrite 00:23:09.341 00:57:01 -- host/digest.sh@80 -- # bs=131072 00:23:09.341 00:57:01 -- host/digest.sh@80 -- # qd=16 00:23:09.341 00:57:01 -- host/digest.sh@80 -- # scan_dsa=false 00:23:09.341 00:57:01 -- host/digest.sh@83 -- # bperfpid=1801698 00:23:09.341 00:57:01 -- host/digest.sh@84 -- # waitforlisten 1801698 /var/tmp/bperf.sock 00:23:09.341 00:57:01 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:09.341 00:57:01 -- common/autotest_common.sh@817 -- # '[' -z 1801698 ']' 00:23:09.341 
00:57:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:09.341 00:57:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:09.341 00:57:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:09.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:09.341 00:57:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:09.341 00:57:01 -- common/autotest_common.sh@10 -- # set +x 00:23:09.341 [2024-04-27 00:57:01.928691] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:23:09.341 [2024-04-27 00:57:01.928739] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1801698 ] 00:23:09.341 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:09.341 Zero copy mechanism will not be used. 00:23:09.341 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.341 [2024-04-27 00:57:01.983207] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.599 [2024-04-27 00:57:02.062675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.165 00:57:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:10.165 00:57:02 -- common/autotest_common.sh@850 -- # return 0 00:23:10.165 00:57:02 -- host/digest.sh@86 -- # false 00:23:10.165 00:57:02 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:10.165 00:57:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:10.421 00:57:02 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:10.421 00:57:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:10.678 nvme0n1 00:23:10.678 00:57:03 -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:10.678 00:57:03 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:10.968 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:10.968 Zero copy mechanism will not be used. 00:23:10.968 Running I/O for 2 seconds... 
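Each run_bperf invocation in this block spins up its own bdevperf: core mask 0x2 keeps it off the target's core 0, -z/--wait-for-rpc holds it idle so the digest options can be applied first, and waitforlisten (the autotest_common.sh helper traced above) blocks until the bperf socket answers. A sketch of that launch, following the command line recorded in the trace; rw, bs and qd are the run_bperf arguments, and backgrounding with & and $! is an assumption since the trace only records the resulting pid:

# most recent run above: rw=randwrite, bs=131072, qd=16
"$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
bperfpid=$!
waitforlisten "$bperfpid" /var/tmp/bperf.sock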
00:23:12.870 00:23:12.870 Latency(us) 00:23:12.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.870 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:12.870 nvme0n1 : 2.01 1692.67 211.58 0.00 0.00 9430.50 7009.50 36244.26 00:23:12.870 =================================================================================================================== 00:23:12.870 Total : 1692.67 211.58 0.00 0.00 9430.50 7009.50 36244.26 00:23:12.870 0 00:23:12.870 00:57:05 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:12.870 00:57:05 -- host/digest.sh@93 -- # get_accel_stats 00:23:12.870 00:57:05 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:12.870 00:57:05 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:12.870 | select(.opcode=="crc32c") 00:23:12.870 | "\(.module_name) \(.executed)"' 00:23:12.870 00:57:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:13.130 00:57:05 -- host/digest.sh@94 -- # false 00:23:13.130 00:57:05 -- host/digest.sh@94 -- # exp_module=software 00:23:13.130 00:57:05 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:13.130 00:57:05 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:13.130 00:57:05 -- host/digest.sh@98 -- # killprocess 1801698 00:23:13.130 00:57:05 -- common/autotest_common.sh@936 -- # '[' -z 1801698 ']' 00:23:13.130 00:57:05 -- common/autotest_common.sh@940 -- # kill -0 1801698 00:23:13.130 00:57:05 -- common/autotest_common.sh@941 -- # uname 00:23:13.130 00:57:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:13.130 00:57:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1801698 00:23:13.130 00:57:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:13.130 00:57:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:13.130 00:57:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1801698' 00:23:13.130 killing process with pid 1801698 00:23:13.130 00:57:05 -- common/autotest_common.sh@955 -- # kill 1801698 00:23:13.130 Received shutdown signal, test time was about 2.000000 seconds 00:23:13.130 00:23:13.130 Latency(us) 00:23:13.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.130 =================================================================================================================== 00:23:13.130 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:13.130 00:57:05 -- common/autotest_common.sh@960 -- # wait 1801698 00:23:13.389 00:57:05 -- host/digest.sh@132 -- # killprocess 1799362 00:23:13.389 00:57:05 -- common/autotest_common.sh@936 -- # '[' -z 1799362 ']' 00:23:13.389 00:57:05 -- common/autotest_common.sh@940 -- # kill -0 1799362 00:23:13.389 00:57:05 -- common/autotest_common.sh@941 -- # uname 00:23:13.389 00:57:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:13.389 00:57:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1799362 00:23:13.389 00:57:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:13.389 00:57:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:13.389 00:57:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1799362' 00:23:13.389 killing process with pid 1799362 00:23:13.389 00:57:05 -- common/autotest_common.sh@955 -- # kill 1799362 00:23:13.389 00:57:05 -- common/autotest_common.sh@960 -- # wait 1799362 00:23:13.647 
00:23:13.647 real 0m17.160s 00:23:13.647 user 0m33.857s 00:23:13.647 sys 0m3.488s 00:23:13.647 00:57:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:13.647 00:57:06 -- common/autotest_common.sh@10 -- # set +x 00:23:13.647 ************************************ 00:23:13.647 END TEST nvmf_digest_clean 00:23:13.647 ************************************ 00:23:13.647 00:57:06 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:23:13.647 00:57:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:13.647 00:57:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:13.647 00:57:06 -- common/autotest_common.sh@10 -- # set +x 00:23:13.907 ************************************ 00:23:13.907 START TEST nvmf_digest_error 00:23:13.907 ************************************ 00:23:13.907 00:57:06 -- common/autotest_common.sh@1111 -- # run_digest_error 00:23:13.907 00:57:06 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:23:13.907 00:57:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:13.907 00:57:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:13.907 00:57:06 -- common/autotest_common.sh@10 -- # set +x 00:23:13.907 00:57:06 -- nvmf/common.sh@470 -- # nvmfpid=1802869 00:23:13.907 00:57:06 -- nvmf/common.sh@471 -- # waitforlisten 1802869 00:23:13.907 00:57:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:13.907 00:57:06 -- common/autotest_common.sh@817 -- # '[' -z 1802869 ']' 00:23:13.907 00:57:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.907 00:57:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:13.907 00:57:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.907 00:57:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:13.907 00:57:06 -- common/autotest_common.sh@10 -- # set +x 00:23:13.907 [2024-04-27 00:57:06.399151] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:23:13.907 [2024-04-27 00:57:06.399196] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.907 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.907 [2024-04-27 00:57:06.455985] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.907 [2024-04-27 00:57:06.538288] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.907 [2024-04-27 00:57:06.538322] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.907 [2024-04-27 00:57:06.538329] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.907 [2024-04-27 00:57:06.538336] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.907 [2024-04-27 00:57:06.538342] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:13.907 [2024-04-27 00:57:06.538360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.528 00:57:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:14.528 00:57:07 -- common/autotest_common.sh@850 -- # return 0 00:23:14.528 00:57:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:14.528 00:57:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:14.528 00:57:07 -- common/autotest_common.sh@10 -- # set +x 00:23:14.807 00:57:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.807 00:57:07 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:14.807 00:57:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.807 00:57:07 -- common/autotest_common.sh@10 -- # set +x 00:23:14.807 [2024-04-27 00:57:07.240408] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:14.807 00:57:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.807 00:57:07 -- host/digest.sh@105 -- # common_target_config 00:23:14.807 00:57:07 -- host/digest.sh@43 -- # rpc_cmd 00:23:14.807 00:57:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.807 00:57:07 -- common/autotest_common.sh@10 -- # set +x 00:23:14.807 null0 00:23:14.807 [2024-04-27 00:57:07.330527] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.807 [2024-04-27 00:57:07.354708] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.807 00:57:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.807 00:57:07 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:23:14.807 00:57:07 -- host/digest.sh@54 -- # local rw bs qd 00:23:14.807 00:57:07 -- host/digest.sh@56 -- # rw=randread 00:23:14.807 00:57:07 -- host/digest.sh@56 -- # bs=4096 00:23:14.807 00:57:07 -- host/digest.sh@56 -- # qd=128 00:23:14.807 00:57:07 -- host/digest.sh@58 -- # bperfpid=1802986 00:23:14.807 00:57:07 -- host/digest.sh@60 -- # waitforlisten 1802986 /var/tmp/bperf.sock 00:23:14.807 00:57:07 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:14.807 00:57:07 -- common/autotest_common.sh@817 -- # '[' -z 1802986 ']' 00:23:14.807 00:57:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:14.807 00:57:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:14.807 00:57:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:14.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:14.807 00:57:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:14.807 00:57:07 -- common/autotest_common.sh@10 -- # set +x 00:23:14.807 [2024-04-27 00:57:07.402974] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
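Because nvmf_tgt is started with -e 0xFFFF, every tracepoint group is enabled and the trace buffer sits in shared memory as /dev/shm/nvmf_trace.0; as the notice above says, it can be decoded live or copied out for offline analysis. For example (the destination paths here are arbitrary):

spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt   # decode a live snapshot of the running target
cp /dev/shm/nvmf_trace.0 /tmp/                  # or keep the raw buffer for later debugging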
00:23:14.807 [2024-04-27 00:57:07.403017] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1802986 ] 00:23:14.807 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.807 [2024-04-27 00:57:07.455775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.066 [2024-04-27 00:57:07.527036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.632 00:57:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:15.632 00:57:08 -- common/autotest_common.sh@850 -- # return 0 00:23:15.632 00:57:08 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:15.633 00:57:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:15.891 00:57:08 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:15.891 00:57:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.891 00:57:08 -- common/autotest_common.sh@10 -- # set +x 00:23:15.891 00:57:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.891 00:57:08 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:15.891 00:57:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:16.150 nvme0n1 00:23:16.150 00:57:08 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:16.150 00:57:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.150 00:57:08 -- common/autotest_common.sh@10 -- # set +x 00:23:16.150 00:57:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.150 00:57:08 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:16.150 00:57:08 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:16.150 Running I/O for 2 seconds... 
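What separates this error run from the clean digest runs is a single accel reroute: on the target, crc32c is assigned to the error-injection module, injection stays disabled while the --ddgst controller connects, and it is then switched to corrupt, so the reads that follow complete with the data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions seen below. The RPC sequence as traced above (rpc_cmd goes to the target listening on /var/tmp/spdk.sock, bperf_rpc/bperf_py to the bdevperf socket):

rpc_cmd accel_assign_opc -o crc32c -m error             # target: route crc32c through the error module
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc_cmd accel_error_inject_error -o crc32c -t disable   # keep injection off while connecting
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
bperf_py perform_tests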
00:23:16.150 [2024-04-27 00:57:08.816032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.150 [2024-04-27 00:57:08.816067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.150 [2024-04-27 00:57:08.816084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.150 [2024-04-27 00:57:08.827321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.150 [2024-04-27 00:57:08.827346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.150 [2024-04-27 00:57:08.827356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.150 [2024-04-27 00:57:08.836779] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.150 [2024-04-27 00:57:08.836803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.150 [2024-04-27 00:57:08.836812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.846939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.846964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.846973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.856644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.856664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.856673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.866265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.866301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.866310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.876844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.876865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.876872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.885412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.885432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.885440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.895765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.895785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.895793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.904523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.904543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.904551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.915253] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.915274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.915282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.923757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.923777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.923785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.934664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.934684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.934692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.943970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.943991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.943999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.953068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.953094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.953102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.963324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.963346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.963354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.409 [2024-04-27 00:57:08.972553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.409 [2024-04-27 00:57:08.972575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.409 [2024-04-27 00:57:08.972583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.410 [2024-04-27 00:57:08.982288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.410 [2024-04-27 00:57:08.982309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.410 [2024-04-27 00:57:08.982318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.410 [2024-04-27 00:57:08.991109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.410 [2024-04-27 00:57:08.991130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.410 [2024-04-27 00:57:08.991139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.410 [2024-04-27 00:57:09.001967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.410 [2024-04-27 00:57:09.001988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.410 [2024-04-27 00:57:09.001997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.410 [2024-04-27 00:57:09.010467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.410 [2024-04-27 00:57:09.010487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:16.410 [2024-04-27 00:57:09.010502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.410 [2024-04-27 00:57:09.021576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.410 [2024-04-27 00:57:09.021597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.410 [2024-04-27 00:57:09.021605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.410 [2024-04-27 00:57:09.030146] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.410 [2024-04-27 00:57:09.030167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.410 [2024-04-27 00:57:09.030175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.410 [2024-04-27 00:57:09.040630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.410 [2024-04-27 00:57:09.040651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.410 [2024-04-27 00:57:09.040659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.410 [2024-04-27 00:57:09.049635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.410 [2024-04-27 00:57:09.049657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.410 [2024-04-27 00:57:09.049665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.410 [2024-04-27 00:57:09.060302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.410 [2024-04-27 00:57:09.060323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.410 [2024-04-27 00:57:09.060331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.410 [2024-04-27 00:57:09.069104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.410 [2024-04-27 00:57:09.069124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.410 [2024-04-27 00:57:09.069132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.410 [2024-04-27 00:57:09.078932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.410 [2024-04-27 00:57:09.078955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:15447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.410 [2024-04-27 00:57:09.078963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.410 [2024-04-27 00:57:09.088708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.410 [2024-04-27 00:57:09.088729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.410 [2024-04-27 00:57:09.088737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.410 [2024-04-27 00:57:09.098217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.410 [2024-04-27 00:57:09.098241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.410 [2024-04-27 00:57:09.098249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.669 [2024-04-27 00:57:09.108042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.669 [2024-04-27 00:57:09.108064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.669 [2024-04-27 00:57:09.108079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.669 [2024-04-27 00:57:09.118331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.669 [2024-04-27 00:57:09.118352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.669 [2024-04-27 00:57:09.118360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.669 [2024-04-27 00:57:09.127951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.669 [2024-04-27 00:57:09.127973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.669 [2024-04-27 00:57:09.127980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.669 [2024-04-27 00:57:09.137493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.669 [2024-04-27 00:57:09.137514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.669 [2024-04-27 00:57:09.137522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.669 [2024-04-27 00:57:09.146963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.669 [2024-04-27 00:57:09.146984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.669 [2024-04-27 00:57:09.146992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.669 [2024-04-27 00:57:09.156937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.669 [2024-04-27 00:57:09.156958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.669 [2024-04-27 00:57:09.156966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.166093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.166113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.166121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.177433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.177453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.177461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.186266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.186286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.186294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.195108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.195129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.195137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.204913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.204934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.204942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.215893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 
[2024-04-27 00:57:09.215913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.215921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.225141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.225162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.225171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.235205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.235225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.235233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.244794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.244814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.244823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.253010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.253031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.253040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.263819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.263840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.263851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.272831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.272851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.272859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.283159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.283180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.283188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.292550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.292570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.292579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.301716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.301736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.301744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.311602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.311623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.311631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.321602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.321622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.321630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.330520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.330541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.330549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.340710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.340730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.340739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.350430] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.350451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.350459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.670 [2024-04-27 00:57:09.359693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.670 [2024-04-27 00:57:09.359723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.670 [2024-04-27 00:57:09.359736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.930 [2024-04-27 00:57:09.369941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.930 [2024-04-27 00:57:09.369962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.930 [2024-04-27 00:57:09.369969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.930 [2024-04-27 00:57:09.380117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.930 [2024-04-27 00:57:09.380138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.930 [2024-04-27 00:57:09.380146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.930 [2024-04-27 00:57:09.389567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.930 [2024-04-27 00:57:09.389587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.930 [2024-04-27 00:57:09.389595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.930 [2024-04-27 00:57:09.398177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.930 [2024-04-27 00:57:09.398198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.930 [2024-04-27 00:57:09.398205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.930 [2024-04-27 00:57:09.408303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.930 [2024-04-27 00:57:09.408323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.930 [2024-04-27 00:57:09.408331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:16.930 [2024-04-27 00:57:09.417737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.930 [2024-04-27 00:57:09.417757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.930 [2024-04-27 00:57:09.417765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.930 [2024-04-27 00:57:09.427437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.930 [2024-04-27 00:57:09.427457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.930 [2024-04-27 00:57:09.427468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.930 [2024-04-27 00:57:09.436776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.930 [2024-04-27 00:57:09.436796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.930 [2024-04-27 00:57:09.436804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.930 [2024-04-27 00:57:09.446470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.930 [2024-04-27 00:57:09.446490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.930 [2024-04-27 00:57:09.446498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.930 [2024-04-27 00:57:09.456252] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.930 [2024-04-27 00:57:09.456272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.930 [2024-04-27 00:57:09.456280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.930 [2024-04-27 00:57:09.465904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.930 [2024-04-27 00:57:09.465925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.930 [2024-04-27 00:57:09.465932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.930 [2024-04-27 00:57:09.475383] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.930 [2024-04-27 00:57:09.475402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.930 [2024-04-27 00:57:09.475411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.930 [2024-04-27 00:57:09.484390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.930 [2024-04-27 00:57:09.484410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.930 [2024-04-27 00:57:09.484418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.930 [2024-04-27 00:57:09.494825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.931 [2024-04-27 00:57:09.494845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.931 [2024-04-27 00:57:09.494854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.931 [2024-04-27 00:57:09.504228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.931 [2024-04-27 00:57:09.504249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.931 [2024-04-27 00:57:09.504257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.931 [2024-04-27 00:57:09.514015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.931 [2024-04-27 00:57:09.514040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.931 [2024-04-27 00:57:09.514048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.931 [2024-04-27 00:57:09.522976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.931 [2024-04-27 00:57:09.522996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.931 [2024-04-27 00:57:09.523004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.931 [2024-04-27 00:57:09.532644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.931 [2024-04-27 00:57:09.532664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.931 [2024-04-27 00:57:09.532672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.931 [2024-04-27 00:57:09.542196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.931 [2024-04-27 00:57:09.542216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.931 [2024-04-27 00:57:09.542224] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.931 [2024-04-27 00:57:09.555921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.931 [2024-04-27 00:57:09.555941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.931 [2024-04-27 00:57:09.555950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.931 [2024-04-27 00:57:09.566777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.931 [2024-04-27 00:57:09.566797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.931 [2024-04-27 00:57:09.566805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.931 [2024-04-27 00:57:09.575317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.931 [2024-04-27 00:57:09.575338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.931 [2024-04-27 00:57:09.575346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.931 [2024-04-27 00:57:09.585241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.931 [2024-04-27 00:57:09.585261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.931 [2024-04-27 00:57:09.585269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.931 [2024-04-27 00:57:09.600829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.931 [2024-04-27 00:57:09.600849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.931 [2024-04-27 00:57:09.600857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.931 [2024-04-27 00:57:09.609384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.931 [2024-04-27 00:57:09.609406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.931 [2024-04-27 00:57:09.609414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:16.931 [2024-04-27 00:57:09.620162] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:16.931 [2024-04-27 00:57:09.620184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:16.931 [2024-04-27 00:57:09.620192] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.190 [2024-04-27 00:57:09.632746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.190 [2024-04-27 00:57:09.632767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.190 [2024-04-27 00:57:09.632775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.190 [2024-04-27 00:57:09.644221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.190 [2024-04-27 00:57:09.644242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.190 [2024-04-27 00:57:09.644250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.190 [2024-04-27 00:57:09.653549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.190 [2024-04-27 00:57:09.653569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.190 [2024-04-27 00:57:09.653578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.190 [2024-04-27 00:57:09.662757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.190 [2024-04-27 00:57:09.662779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.190 [2024-04-27 00:57:09.662787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.190 [2024-04-27 00:57:09.673151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.190 [2024-04-27 00:57:09.673172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.190 [2024-04-27 00:57:09.673180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.190 [2024-04-27 00:57:09.681443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.190 [2024-04-27 00:57:09.681464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.190 [2024-04-27 00:57:09.681472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.190 [2024-04-27 00:57:09.693050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.190 [2024-04-27 00:57:09.693074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:17.190 [2024-04-27 00:57:09.693086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.190 [2024-04-27 00:57:09.703076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.190 [2024-04-27 00:57:09.703096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.190 [2024-04-27 00:57:09.703104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.190 [2024-04-27 00:57:09.711377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.190 [2024-04-27 00:57:09.711397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.190 [2024-04-27 00:57:09.711405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.190 [2024-04-27 00:57:09.725390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.190 [2024-04-27 00:57:09.725410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.190 [2024-04-27 00:57:09.725418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.190 [2024-04-27 00:57:09.737733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.190 [2024-04-27 00:57:09.737753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.190 [2024-04-27 00:57:09.737762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.190 [2024-04-27 00:57:09.746540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.190 [2024-04-27 00:57:09.746560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.190 [2024-04-27 00:57:09.746568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.190 [2024-04-27 00:57:09.760359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.190 [2024-04-27 00:57:09.760379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.191 [2024-04-27 00:57:09.760387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.191 [2024-04-27 00:57:09.769006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.191 [2024-04-27 00:57:09.769026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9754 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.191 [2024-04-27 00:57:09.769034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.191 [2024-04-27 00:57:09.778338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.191 [2024-04-27 00:57:09.778357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.191 [2024-04-27 00:57:09.778365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.191 [2024-04-27 00:57:09.792178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.191 [2024-04-27 00:57:09.792203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.191 [2024-04-27 00:57:09.792211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.191 [2024-04-27 00:57:09.800939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.191 [2024-04-27 00:57:09.800959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.191 [2024-04-27 00:57:09.800967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.191 [2024-04-27 00:57:09.810427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.191 [2024-04-27 00:57:09.810448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.191 [2024-04-27 00:57:09.810456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.191 [2024-04-27 00:57:09.822762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.191 [2024-04-27 00:57:09.822783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.191 [2024-04-27 00:57:09.822791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.191 [2024-04-27 00:57:09.832288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.191 [2024-04-27 00:57:09.832308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.191 [2024-04-27 00:57:09.832316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.191 [2024-04-27 00:57:09.841503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.191 [2024-04-27 00:57:09.841524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:124 nsid:1 lba:24037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.191 [2024-04-27 00:57:09.841532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.191 [2024-04-27 00:57:09.855146] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.191 [2024-04-27 00:57:09.855167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.191 [2024-04-27 00:57:09.855175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.191 [2024-04-27 00:57:09.867987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.191 [2024-04-27 00:57:09.868007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.191 [2024-04-27 00:57:09.868015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.191 [2024-04-27 00:57:09.877599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.191 [2024-04-27 00:57:09.877619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.191 [2024-04-27 00:57:09.877626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.450 [2024-04-27 00:57:09.892680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.450 [2024-04-27 00:57:09.892701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.450 [2024-04-27 00:57:09.892709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.450 [2024-04-27 00:57:09.901356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.450 [2024-04-27 00:57:09.901377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.450 [2024-04-27 00:57:09.901385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.450 [2024-04-27 00:57:09.912868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.450 [2024-04-27 00:57:09.912889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.450 [2024-04-27 00:57:09.912897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.450 [2024-04-27 00:57:09.921903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.450 [2024-04-27 00:57:09.921924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.450 [2024-04-27 00:57:09.921933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.450 [2024-04-27 00:57:09.933509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.450 [2024-04-27 00:57:09.933530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.450 [2024-04-27 00:57:09.933537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.450 [2024-04-27 00:57:09.944708] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.450 [2024-04-27 00:57:09.944728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.450 [2024-04-27 00:57:09.944736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.450 [2024-04-27 00:57:09.953501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.450 [2024-04-27 00:57:09.953521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.450 [2024-04-27 00:57:09.953529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.450 [2024-04-27 00:57:09.964670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.450 [2024-04-27 00:57:09.964691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.450 [2024-04-27 00:57:09.964698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.450 [2024-04-27 00:57:09.977214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.450 [2024-04-27 00:57:09.977234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:09.977246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:09.987486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:09.987507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:09.987515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:09.997341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 
[2024-04-27 00:57:09.997362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:09.997370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.008052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.008085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.008096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.017473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.017495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.017504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.030048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.030078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.030088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.039578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.039600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.039609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.049422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.049443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.049452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.060428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.060450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.060459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.071083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.071104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.071113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.079417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.079438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.079446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.090308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.090329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.090337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.101033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.101055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.101063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.110798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.110819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.110827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.120261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.120283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.120291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.131109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.131131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.131139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.451 [2024-04-27 00:57:10.141403] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.451 [2024-04-27 00:57:10.141424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.451 [2024-04-27 00:57:10.141433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.151583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.151604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.151617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.161739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.161761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.161769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.171763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.171784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.171793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.180715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.180736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.180744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.190834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.190856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.190865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.200336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.200358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.200366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:17.710 [2024-04-27 00:57:10.211194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.211216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.211224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.221622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.221643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.221651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.230483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.230505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.230513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.241282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.241306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.241315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.251560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.251581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.251590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.260259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.260279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.260287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.270515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.270536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.270544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.280834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.280854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.280862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.290303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.290324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.290332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.300433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.300455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:19703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.300464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.310036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.310057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.310065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.319237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.319258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.319267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.330442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.330463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.330471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.340188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.340209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.340217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.349576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.349596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.349604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.359782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.710 [2024-04-27 00:57:10.359803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.710 [2024-04-27 00:57:10.359811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.710 [2024-04-27 00:57:10.369139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.711 [2024-04-27 00:57:10.369161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.711 [2024-04-27 00:57:10.369169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.711 [2024-04-27 00:57:10.379355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.711 [2024-04-27 00:57:10.379377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.711 [2024-04-27 00:57:10.379386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.711 [2024-04-27 00:57:10.388973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.711 [2024-04-27 00:57:10.388994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.711 [2024-04-27 00:57:10.389002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.711 [2024-04-27 00:57:10.399352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.711 [2024-04-27 00:57:10.399372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.711 [2024-04-27 00:57:10.399380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.409087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.409108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.409119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.419353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.419375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.419384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.428880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.428901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.428909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.439185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.439205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.439214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.448363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.448384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.448392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.459568] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.459589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.459597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.467944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.467965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.467973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.478342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.478363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 
[2024-04-27 00:57:10.478371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.488659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.488680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.488688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.498678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.498701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.498710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.508141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.508161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.508170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.517540] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.517560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.517568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.526661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.526682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.526689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.537126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.537146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.537154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.546191] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.546211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16409 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.546219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.556103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.556125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.556133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.566312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.566333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.566342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.574932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.574952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.574961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.585432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.585453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.585461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.593758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.593778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.593786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.604406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.604426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.604434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.613308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.613328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:84 nsid:1 lba:6774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.613337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.623309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.623329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.623337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.632655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.632676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.632684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.642983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.643004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.643012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.651335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.651356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.651364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:17.971 [2024-04-27 00:57:10.661625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:17.971 [2024-04-27 00:57:10.661646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.971 [2024-04-27 00:57:10.661658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.230 [2024-04-27 00:57:10.672319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:18.230 [2024-04-27 00:57:10.672339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.230 [2024-04-27 00:57:10.672347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.230 [2024-04-27 00:57:10.680777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:18.230 [2024-04-27 00:57:10.680798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.230 [2024-04-27 00:57:10.680806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.230 [2024-04-27 00:57:10.691056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:18.230 [2024-04-27 00:57:10.691082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.230 [2024-04-27 00:57:10.691091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.230 [2024-04-27 00:57:10.701790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:18.230 [2024-04-27 00:57:10.701810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.230 [2024-04-27 00:57:10.701818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.230 [2024-04-27 00:57:10.710211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:18.230 [2024-04-27 00:57:10.710230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.230 [2024-04-27 00:57:10.710239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.230 [2024-04-27 00:57:10.719848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:18.230 [2024-04-27 00:57:10.719868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.230 [2024-04-27 00:57:10.719876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.230 [2024-04-27 00:57:10.729962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:18.230 [2024-04-27 00:57:10.729982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.230 [2024-04-27 00:57:10.729990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.230 [2024-04-27 00:57:10.739606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:18.230 [2024-04-27 00:57:10.739625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.230 [2024-04-27 00:57:10.739633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.230 [2024-04-27 00:57:10.748779] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 
00:23:18.230 [2024-04-27 00:57:10.748800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.230 [2024-04-27 00:57:10.748808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.230 [2024-04-27 00:57:10.758519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:18.230 [2024-04-27 00:57:10.758539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.230 [2024-04-27 00:57:10.758547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.231 [2024-04-27 00:57:10.768899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:18.231 [2024-04-27 00:57:10.768919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.231 [2024-04-27 00:57:10.768926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.231 [2024-04-27 00:57:10.777645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:18.231 [2024-04-27 00:57:10.777665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.231 [2024-04-27 00:57:10.777673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.231 [2024-04-27 00:57:10.786998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:18.231 [2024-04-27 00:57:10.787017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.231 [2024-04-27 00:57:10.787025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.231 [2024-04-27 00:57:10.797357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbcd970) 00:23:18.231 [2024-04-27 00:57:10.797378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.231 [2024-04-27 00:57:10.797386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:18.231 00:23:18.231 Latency(us) 00:23:18.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.231 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:18.231 nvme0n1 : 2.00 25428.11 99.33 0.00 0.00 5028.07 2037.31 15728.64 00:23:18.231 =================================================================================================================== 00:23:18.231 Total : 25428.11 99.33 0.00 0.00 5028.07 2037.31 15728.64 00:23:18.231 0 00:23:18.231 00:57:10 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:18.231 00:57:10 -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:18.231 | .driver_specific 00:23:18.231 | .nvme_error 00:23:18.231 | .status_code 00:23:18.231 | .command_transient_transport_error' 00:23:18.231 00:57:10 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:18.231 00:57:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:18.489 00:57:10 -- host/digest.sh@71 -- # (( 199 > 0 )) 00:23:18.489 00:57:11 -- host/digest.sh@73 -- # killprocess 1802986 00:23:18.489 00:57:11 -- common/autotest_common.sh@936 -- # '[' -z 1802986 ']' 00:23:18.489 00:57:11 -- common/autotest_common.sh@940 -- # kill -0 1802986 00:23:18.489 00:57:11 -- common/autotest_common.sh@941 -- # uname 00:23:18.489 00:57:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:18.489 00:57:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1802986 00:23:18.489 00:57:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:18.489 00:57:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:18.489 00:57:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1802986' 00:23:18.489 killing process with pid 1802986 00:23:18.489 00:57:11 -- common/autotest_common.sh@955 -- # kill 1802986 00:23:18.489 Received shutdown signal, test time was about 2.000000 seconds 00:23:18.489 00:23:18.490 Latency(us) 00:23:18.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.490 =================================================================================================================== 00:23:18.490 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.490 00:57:11 -- common/autotest_common.sh@960 -- # wait 1802986 00:23:18.748 00:57:11 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:23:18.748 00:57:11 -- host/digest.sh@54 -- # local rw bs qd 00:23:18.748 00:57:11 -- host/digest.sh@56 -- # rw=randread 00:23:18.748 00:57:11 -- host/digest.sh@56 -- # bs=131072 00:23:18.748 00:57:11 -- host/digest.sh@56 -- # qd=16 00:23:18.748 00:57:11 -- host/digest.sh@58 -- # bperfpid=1803670 00:23:18.748 00:57:11 -- host/digest.sh@60 -- # waitforlisten 1803670 /var/tmp/bperf.sock 00:23:18.748 00:57:11 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:18.748 00:57:11 -- common/autotest_common.sh@817 -- # '[' -z 1803670 ']' 00:23:18.748 00:57:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:18.748 00:57:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:18.748 00:57:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:18.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:18.748 00:57:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:18.748 00:57:11 -- common/autotest_common.sh@10 -- # set +x 00:23:18.748 [2024-04-27 00:57:11.287312] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:23:18.748 [2024-04-27 00:57:11.287359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803670 ] 00:23:18.748 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:18.748 Zero copy mechanism will not be used. 00:23:18.748 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.748 [2024-04-27 00:57:11.340750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.748 [2024-04-27 00:57:11.417276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.684 00:57:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:19.684 00:57:12 -- common/autotest_common.sh@850 -- # return 0 00:23:19.684 00:57:12 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:19.684 00:57:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:19.684 00:57:12 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:19.684 00:57:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.684 00:57:12 -- common/autotest_common.sh@10 -- # set +x 00:23:19.684 00:57:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.684 00:57:12 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:19.684 00:57:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:19.943 nvme0n1 00:23:19.943 00:57:12 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:19.943 00:57:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:19.943 00:57:12 -- common/autotest_common.sh@10 -- # set +x 00:23:19.943 00:57:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:19.943 00:57:12 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:19.943 00:57:12 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:20.203 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:20.203 Zero copy mechanism will not be used. 00:23:20.203 Running I/O for 2 seconds... 
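(Aside: the traced host/digest.sh lines above double as a reproduction recipe for this digest-error pass. Below is a condensed sketch reconstructed only from the rpc.py/bdevperf.py invocations visible in the trace; the socket path, the 10.0.0.2:4420 target and the nqn.2016-06.io.spdk:cnode1 subsystem are the values from this run, while which application the rpc_cmd wrapper talks to, and the exact meaning of -i 32, are not shown here and are left to the accel_error_inject_error RPC documentation.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# start bdevperf on its own RPC socket (this pass: 128 KiB random reads, qd 16, 2 s)
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
# keep per-type NVMe error counters and retry indefinitely so digest errors stay transient
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# attach the TCP controller with data digest enabled (--ddgst)
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# corrupt crc32c results via the accel error-injection RPC (the script's rpc_cmd wrapper; its socket is not shown in the trace)
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
# run the workload, then read back the transient-transport-error count that the script checks with (( count > 0 ))
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each corrupted digest shows up in the records that follow as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion that the bdev layer retries, which is what the counter read back at the end is measuring.)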
00:23:20.203 [2024-04-27 00:57:12.695050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.203 [2024-04-27 00:57:12.695089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-04-27 00:57:12.695100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.203 [2024-04-27 00:57:12.709066] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.203 [2024-04-27 00:57:12.709097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-04-27 00:57:12.709106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.203 [2024-04-27 00:57:12.721333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.203 [2024-04-27 00:57:12.721356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-04-27 00:57:12.721364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.203 [2024-04-27 00:57:12.733281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.203 [2024-04-27 00:57:12.733302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-04-27 00:57:12.733311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.203 [2024-04-27 00:57:12.745387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.203 [2024-04-27 00:57:12.745407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-04-27 00:57:12.745414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.203 [2024-04-27 00:57:12.757420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.203 [2024-04-27 00:57:12.757441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-04-27 00:57:12.757450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.203 [2024-04-27 00:57:12.769251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.203 [2024-04-27 00:57:12.769271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-04-27 00:57:12.769280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.203 [2024-04-27 00:57:12.781085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.203 [2024-04-27 00:57:12.781110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-04-27 00:57:12.781119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.203 [2024-04-27 00:57:12.793050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.203 [2024-04-27 00:57:12.793076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.203 [2024-04-27 00:57:12.793085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.204 [2024-04-27 00:57:12.804880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.204 [2024-04-27 00:57:12.804900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-04-27 00:57:12.804909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.204 [2024-04-27 00:57:12.816730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.204 [2024-04-27 00:57:12.816750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-04-27 00:57:12.816759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.204 [2024-04-27 00:57:12.828599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.204 [2024-04-27 00:57:12.828619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-04-27 00:57:12.828627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.204 [2024-04-27 00:57:12.840471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.204 [2024-04-27 00:57:12.840491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-04-27 00:57:12.840500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.204 [2024-04-27 00:57:12.852322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.204 [2024-04-27 00:57:12.852343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-04-27 00:57:12.852352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.204 [2024-04-27 00:57:12.864177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.204 [2024-04-27 00:57:12.864197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-04-27 00:57:12.864205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.204 [2024-04-27 00:57:12.876064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.204 [2024-04-27 00:57:12.876090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-04-27 00:57:12.876099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.204 [2024-04-27 00:57:12.888055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.204 [2024-04-27 00:57:12.888082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.204 [2024-04-27 00:57:12.888091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.462 [2024-04-27 00:57:12.899978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.462 [2024-04-27 00:57:12.899999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.462 [2024-04-27 00:57:12.900009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:12.911863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:12.911883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:12.911892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:12.923810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:12.923830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:12.923840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:12.935752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:12.935772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:20.463 [2024-04-27 00:57:12.935782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:12.947735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:12.947756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:12.947764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:12.959711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:12.959731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:12.959739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:12.971530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:12.971551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:12.971559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:12.983464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:12.983486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:12.983500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:12.995471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:12.995492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:12.995501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:13.007348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:13.007371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:13.007380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:13.019307] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:13.019329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:13.019337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:13.031329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:13.031351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:13.031360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:13.043304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:13.043326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:13.043334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:13.055147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:13.055168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:13.055176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:13.066977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:13.066998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:13.067007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:13.078977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:13.078997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:13.079007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:13.090819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:13.090842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:13.090852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:13.102726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:13.102746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:13.102754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:13.114997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:13.115017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:13.115025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:13.127153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:13.127174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:13.127183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:13.139103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:13.139122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:13.139131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.463 [2024-04-27 00:57:13.151418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.463 [2024-04-27 00:57:13.151439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.463 [2024-04-27 00:57:13.151448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.163436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.163456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.163465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.175493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.175514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.175522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.187469] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 
00:23:20.721 [2024-04-27 00:57:13.187489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.187501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.199374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.199394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.199402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.211410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.211431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.211439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.223324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.223345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.223352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.235240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.235261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.235269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.247092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.247113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.247121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.258964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.258984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.258993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.270915] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.270936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.270944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.282847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.282868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.282876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.294729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.294754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.294763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.306575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.306596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.306604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.318571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.318591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.318600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.330628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.330648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.721 [2024-04-27 00:57:13.330656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.721 [2024-04-27 00:57:13.342542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.721 [2024-04-27 00:57:13.342562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.722 [2024-04-27 00:57:13.342570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:23:20.722 [2024-04-27 00:57:13.354462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.722 [2024-04-27 00:57:13.354482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.722 [2024-04-27 00:57:13.354490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.722 [2024-04-27 00:57:13.366478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.722 [2024-04-27 00:57:13.366499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.722 [2024-04-27 00:57:13.366507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.722 [2024-04-27 00:57:13.378358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.722 [2024-04-27 00:57:13.378378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.722 [2024-04-27 00:57:13.378387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.722 [2024-04-27 00:57:13.390370] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.722 [2024-04-27 00:57:13.390391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.722 [2024-04-27 00:57:13.390399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.722 [2024-04-27 00:57:13.402291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.722 [2024-04-27 00:57:13.402312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.722 [2024-04-27 00:57:13.402320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.722 [2024-04-27 00:57:13.414164] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.722 [2024-04-27 00:57:13.414200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.722 [2024-04-27 00:57:13.414209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.981 [2024-04-27 00:57:13.426157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.981 [2024-04-27 00:57:13.426185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.981 [2024-04-27 00:57:13.426194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.981 [2024-04-27 00:57:13.438033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.981 [2024-04-27 00:57:13.438053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.981 [2024-04-27 00:57:13.438061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.981 [2024-04-27 00:57:13.449848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.981 [2024-04-27 00:57:13.449868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.981 [2024-04-27 00:57:13.449876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.981 [2024-04-27 00:57:13.461686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.981 [2024-04-27 00:57:13.461707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.981 [2024-04-27 00:57:13.461715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.981 [2024-04-27 00:57:13.473575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.981 [2024-04-27 00:57:13.473595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.981 [2024-04-27 00:57:13.473604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.981 [2024-04-27 00:57:13.485701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.981 [2024-04-27 00:57:13.485721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.981 [2024-04-27 00:57:13.485730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.981 [2024-04-27 00:57:13.497625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.981 [2024-04-27 00:57:13.497646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.981 [2024-04-27 00:57:13.497658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.981 [2024-04-27 00:57:13.509452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.981 [2024-04-27 00:57:13.509472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.981 [2024-04-27 00:57:13.509481] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.981 [2024-04-27 00:57:13.521360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.981 [2024-04-27 00:57:13.521380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.981 [2024-04-27 00:57:13.521389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.981 [2024-04-27 00:57:13.533258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.981 [2024-04-27 00:57:13.533279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.981 [2024-04-27 00:57:13.533288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.981 [2024-04-27 00:57:13.545097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.981 [2024-04-27 00:57:13.545117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.981 [2024-04-27 00:57:13.545126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.981 [2024-04-27 00:57:13.556961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.981 [2024-04-27 00:57:13.556982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.981 [2024-04-27 00:57:13.556991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.982 [2024-04-27 00:57:13.568839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.982 [2024-04-27 00:57:13.568859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.982 [2024-04-27 00:57:13.568868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.982 [2024-04-27 00:57:13.580696] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.982 [2024-04-27 00:57:13.580716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.982 [2024-04-27 00:57:13.580724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.982 [2024-04-27 00:57:13.592821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.982 [2024-04-27 00:57:13.592842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:20.982 [2024-04-27 00:57:13.592850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.982 [2024-04-27 00:57:13.604658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.982 [2024-04-27 00:57:13.604678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.982 [2024-04-27 00:57:13.604686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.982 [2024-04-27 00:57:13.616518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.982 [2024-04-27 00:57:13.616538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.982 [2024-04-27 00:57:13.616546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.982 [2024-04-27 00:57:13.628378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.982 [2024-04-27 00:57:13.628398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.982 [2024-04-27 00:57:13.628406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.982 [2024-04-27 00:57:13.640311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.982 [2024-04-27 00:57:13.640332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.982 [2024-04-27 00:57:13.640340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.982 [2024-04-27 00:57:13.652168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.982 [2024-04-27 00:57:13.652189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.982 [2024-04-27 00:57:13.652197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.982 [2024-04-27 00:57:13.664246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:20.982 [2024-04-27 00:57:13.664265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.982 [2024-04-27 00:57:13.664274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.677065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.677091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.677100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.690329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.690349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.690358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.702328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.702348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.702360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.715198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.715218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.715227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.727166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.727187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.727195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.739188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.739209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.739217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.751430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.751451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.751460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.764314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.764334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.764343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.784865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.784884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.784892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.806609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.806629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.806637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.819645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.819666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.819674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.834185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.834210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.834218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.849161] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.849182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.849190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.861416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.861437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.861445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.873718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 
00:23:21.242 [2024-04-27 00:57:13.873739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.873748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.886408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.886428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.886436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.898509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.898530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.898538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.911323] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.911344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.911353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.242 [2024-04-27 00:57:13.925535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.242 [2024-04-27 00:57:13.925556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.242 [2024-04-27 00:57:13.925564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:13.939615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:13.939636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:13.939644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:13.960267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:13.960289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:13.960297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:13.977557] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:13.977578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:13.977586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:13.992904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:13.992925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:13.992934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:14.007775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:14.007804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:14.007813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:14.028916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:14.028936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:14.028944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:14.042205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:14.042226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:14.042234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:14.060526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:14.060547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:14.060554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:14.076570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:14.076590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:14.076597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:14.091115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:14.091136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:14.091148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:14.104426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:14.104448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:14.104456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:14.120405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:14.120425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:14.120433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:14.136415] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:14.136438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:14.136447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:14.152698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:14.152720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:14.152728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:14.166970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:14.166991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:14.166999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:14.181002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:14.181023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:14.181031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.502 [2024-04-27 00:57:14.194973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.502 [2024-04-27 00:57:14.194994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.502 [2024-04-27 00:57:14.195003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.211098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.211119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.211127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.233807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.233828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.233836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.253457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.253477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.253484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.269257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.269278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.269286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.282916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.282936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.282944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.295491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.295512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.295521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.311909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.311929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.311937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.327575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.327595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.327603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.340664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.340684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.340692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.352800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.352822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.352834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.365822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.365842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.365852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.378175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.378194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.378203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.390985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.391005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:21.761 [2024-04-27 00:57:14.391014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.403084] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.403105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.403113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.415019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.415040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.415048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.426950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.426971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.426979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.438877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.438898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.438906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:21.761 [2024-04-27 00:57:14.450892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:21.761 [2024-04-27 00:57:14.450913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:21.761 [2024-04-27 00:57:14.450922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.462981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.463006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.463014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.475357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.475378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.475386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.487359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.487381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.487389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.499573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.499594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.499604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.511598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.511619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.511628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.523740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.523760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.523768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.535990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.536010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.536019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.548127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.548150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.548158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.560394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.560415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.560423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.572509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.572529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.572539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.584491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.584511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.584518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.596494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.596516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.596524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.608523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.608544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.608553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.620524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.620544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.021 [2024-04-27 00:57:14.620554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:22.021 [2024-04-27 00:57:14.632503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 00:23:22.021 [2024-04-27 00:57:14.632524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.022 [2024-04-27 00:57:14.632533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:22.022 [2024-04-27 00:57:14.644721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170) 
00:23:22.022 [2024-04-27 00:57:14.644741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.022 [2024-04-27 00:57:14.644750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:22.022 [2024-04-27 00:57:14.656706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170)
00:23:22.022 [2024-04-27 00:57:14.656727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.022 [2024-04-27 00:57:14.656735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:22.022 [2024-04-27 00:57:14.668940] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1bab170)
00:23:22.022 [2024-04-27 00:57:14.668961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:22.022 [2024-04-27 00:57:14.668976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:23:22.022
00:23:22.022 Latency(us)
00:23:22.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:22.022 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:22.022 nvme0n1 : 2.00 2381.79 297.72 0.00 0.00 6712.87 5869.75 25302.59
00:23:22.022 ===================================================================================================================
00:23:22.022 Total : 2381.79 297.72 0.00 0.00 6712.87 5869.75 25302.59
00:23:22.022 0
00:23:22.022 00:57:14 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:57:14 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:57:14 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:22.022 | .driver_specific
00:23:22.022 | .nvme_error
00:23:22.022 | .status_code
00:23:22.022 | .command_transient_transport_error'
00:23:22.022 00:57:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:22.280 00:57:14 -- host/digest.sh@71 -- # (( 154 > 0 ))
00:23:22.280 00:57:14 -- host/digest.sh@73 -- # killprocess 1803670
00:23:22.280 00:57:14 -- common/autotest_common.sh@936 -- # '[' -z 1803670 ']'
00:23:22.280 00:57:14 -- common/autotest_common.sh@940 -- # kill -0 1803670
00:23:22.280 00:57:14 -- common/autotest_common.sh@941 -- # uname
00:23:22.280 00:57:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:22.280 00:57:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1803670
00:23:22.280 00:57:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:23:22.280 00:57:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:23:22.280 00:57:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1803670'
00:23:22.280 killing process with pid 1803670
00:23:22.280 00:57:14 -- common/autotest_common.sh@955 -- # kill 1803670
00:23:22.280 Received shutdown signal, test time was about 2.000000 seconds
00:23:22.280
00:23:22.280 Latency(us)
00:23:22.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:22.280
===================================================================================================================
00:23:22.280 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:22.280 00:57:14 -- common/autotest_common.sh@960 -- # wait 1803670
00:23:22.539 00:57:15 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:57:15 -- host/digest.sh@54 -- # local rw bs qd
00:57:15 -- host/digest.sh@56 -- # rw=randwrite
00:57:15 -- host/digest.sh@56 -- # bs=4096
00:57:15 -- host/digest.sh@56 -- # qd=128
00:57:15 -- host/digest.sh@58 -- # bperfpid=1804369
00:57:15 -- host/digest.sh@60 -- # waitforlisten 1804369 /var/tmp/bperf.sock
00:57:15 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:57:15 -- common/autotest_common.sh@817 -- # '[' -z 1804369 ']'
00:57:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:57:15 -- common/autotest_common.sh@822 -- # local max_retries=100
00:57:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:57:15 -- common/autotest_common.sh@826 -- # xtrace_disable
00:57:15 -- common/autotest_common.sh@10 -- # set +x
00:23:22.539 [2024-04-27 00:57:15.157080] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization...
00:23:22.539 [2024-04-27 00:57:15.157126] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1804369 ]
00:23:22.539 EAL: No free 2048 kB hugepages reported on node 1
00:23:22.539 [2024-04-27 00:57:15.211145] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:22.797 [2024-04-27 00:57:15.289523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:23:23.364 00:57:15 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:23:23.364 00:57:15 -- common/autotest_common.sh@850 -- # return 0
00:23:23.364 00:57:15 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:23.364 00:57:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:23.621 00:57:16 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:23.621 00:57:16 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:23.621 00:57:16 -- common/autotest_common.sh@10 -- # set +x
00:23:23.621 00:57:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:23.621 00:57:16 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:23.621 00:57:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:23.879 nvme0n1
00:23:23.879 00:57:16 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:23:23.879
00:57:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.879 00:57:16 -- common/autotest_common.sh@10 -- # set +x 00:23:23.879 00:57:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.879 00:57:16 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:23.880 00:57:16 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:24.139 Running I/O for 2 seconds... 00:23:24.139 [2024-04-27 00:57:16.646471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fe720 00:23:24.139 [2024-04-27 00:57:16.647362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.647392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:24.139 [2024-04-27 00:57:16.657371] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f2510 00:23:24.139 [2024-04-27 00:57:16.658884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.658908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.139 [2024-04-27 00:57:16.665827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.139 [2024-04-27 00:57:16.666599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.666619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.139 [2024-04-27 00:57:16.675360] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.139 [2024-04-27 00:57:16.676151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.676170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.139 [2024-04-27 00:57:16.684791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.139 [2024-04-27 00:57:16.685611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.685634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.139 [2024-04-27 00:57:16.694315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.139 [2024-04-27 00:57:16.695111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.695129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 
00:23:24.139 [2024-04-27 00:57:16.703719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.139 [2024-04-27 00:57:16.704527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.704546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.139 [2024-04-27 00:57:16.713131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.139 [2024-04-27 00:57:16.713939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.713958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.139 [2024-04-27 00:57:16.722611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.139 [2024-04-27 00:57:16.723408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.723427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.139 [2024-04-27 00:57:16.732013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.139 [2024-04-27 00:57:16.732806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.732825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.139 [2024-04-27 00:57:16.741437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.139 [2024-04-27 00:57:16.742229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.742247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.139 [2024-04-27 00:57:16.750872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.139 [2024-04-27 00:57:16.751672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.751691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.139 [2024-04-27 00:57:16.760296] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.139 [2024-04-27 00:57:16.761094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.761113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 
sqhd:0072 p:0 m:0 dnr:0 00:23:24.139 [2024-04-27 00:57:16.769723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.139 [2024-04-27 00:57:16.770539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-04-27 00:57:16.770558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.139 [2024-04-27 00:57:16.779136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.140 [2024-04-27 00:57:16.780147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.140 [2024-04-27 00:57:16.780166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.140 [2024-04-27 00:57:16.788631] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.140 [2024-04-27 00:57:16.789448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.140 [2024-04-27 00:57:16.789468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.140 [2024-04-27 00:57:16.798254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.140 [2024-04-27 00:57:16.799051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.140 [2024-04-27 00:57:16.799076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.140 [2024-04-27 00:57:16.807792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.140 [2024-04-27 00:57:16.808596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.140 [2024-04-27 00:57:16.808614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.140 [2024-04-27 00:57:16.817240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.140 [2024-04-27 00:57:16.818041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.140 [2024-04-27 00:57:16.818059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.140 [2024-04-27 00:57:16.826664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.140 [2024-04-27 00:57:16.827461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.140 [2024-04-27 00:57:16.827479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.836525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.399 [2024-04-27 00:57:16.837360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.837379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.846079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.399 [2024-04-27 00:57:16.846893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.846912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.855478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.399 [2024-04-27 00:57:16.856291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.856310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.864914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.399 [2024-04-27 00:57:16.865720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.865739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.874368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.399 [2024-04-27 00:57:16.875159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.875177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.883811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.399 [2024-04-27 00:57:16.884602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.884620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.893201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.399 [2024-04-27 00:57:16.893998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.894017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.902633] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.399 [2024-04-27 00:57:16.903445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.903464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.912154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.399 [2024-04-27 00:57:16.912940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.912958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.921532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.399 [2024-04-27 00:57:16.922330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.922349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.930942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.399 [2024-04-27 00:57:16.931750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.931774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.940329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.399 [2024-04-27 00:57:16.941128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.941147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.949738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.399 [2024-04-27 00:57:16.950537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.950556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.959106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.399 [2024-04-27 00:57:16.959897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.959916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.968509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.399 [2024-04-27 00:57:16.969315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.969334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.977924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.399 [2024-04-27 00:57:16.978730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.978748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.987282] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.399 [2024-04-27 00:57:16.988092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.988111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:16.996696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.399 [2024-04-27 00:57:16.997510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:16.997528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:17.006123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.399 [2024-04-27 00:57:17.006914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.399 [2024-04-27 00:57:17.006933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.399 [2024-04-27 00:57:17.015477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.400 [2024-04-27 00:57:17.016281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.400 [2024-04-27 00:57:17.016300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.400 [2024-04-27 00:57:17.024892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.400 [2024-04-27 00:57:17.025690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.400 [2024-04-27 
00:57:17.025709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.400 [2024-04-27 00:57:17.034300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.400 [2024-04-27 00:57:17.035092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.400 [2024-04-27 00:57:17.035111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.400 [2024-04-27 00:57:17.043647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.400 [2024-04-27 00:57:17.044451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.400 [2024-04-27 00:57:17.044470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.400 [2024-04-27 00:57:17.053320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.400 [2024-04-27 00:57:17.054130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.400 [2024-04-27 00:57:17.054148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.400 [2024-04-27 00:57:17.062696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.400 [2024-04-27 00:57:17.063499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.400 [2024-04-27 00:57:17.063518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.400 [2024-04-27 00:57:17.072136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.400 [2024-04-27 00:57:17.072936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.400 [2024-04-27 00:57:17.072955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.400 [2024-04-27 00:57:17.081574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.400 [2024-04-27 00:57:17.082375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.400 [2024-04-27 00:57:17.082393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.400 [2024-04-27 00:57:17.091061] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.400 [2024-04-27 00:57:17.091905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:24.400 [2024-04-27 00:57:17.091924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.100979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.659 [2024-04-27 00:57:17.101804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.101823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.110404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.659 [2024-04-27 00:57:17.111214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.111233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.119781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.659 [2024-04-27 00:57:17.120593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.120612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.129181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.659 [2024-04-27 00:57:17.129969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.129988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.138547] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.659 [2024-04-27 00:57:17.139355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.139373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.147924] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.659 [2024-04-27 00:57:17.148725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.148743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.157385] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.659 [2024-04-27 00:57:17.158179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21262 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.158197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.166916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.659 [2024-04-27 00:57:17.167718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.167737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.176391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.659 [2024-04-27 00:57:17.177195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.177217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.185778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.659 [2024-04-27 00:57:17.186578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.186596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.195196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.659 [2024-04-27 00:57:17.195989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.196007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.204606] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.659 [2024-04-27 00:57:17.205402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.205421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.213998] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.659 [2024-04-27 00:57:17.214988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.215007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.223561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.659 [2024-04-27 00:57:17.224368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24188 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.224386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.233040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.659 [2024-04-27 00:57:17.233832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.233852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.242414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.659 [2024-04-27 00:57:17.243205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.243223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.251789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.659 [2024-04-27 00:57:17.252589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.659 [2024-04-27 00:57:17.252607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.659 [2024-04-27 00:57:17.261228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.659 [2024-04-27 00:57:17.262039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.660 [2024-04-27 00:57:17.262057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.660 [2024-04-27 00:57:17.270604] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.660 [2024-04-27 00:57:17.271404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.660 [2024-04-27 00:57:17.271422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.660 [2024-04-27 00:57:17.280084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.660 [2024-04-27 00:57:17.280884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.660 [2024-04-27 00:57:17.280902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.660 [2024-04-27 00:57:17.289772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.660 [2024-04-27 00:57:17.290625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:21 nsid:1 lba:17457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.660 [2024-04-27 00:57:17.290644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.660 [2024-04-27 00:57:17.299268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.660 [2024-04-27 00:57:17.300090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.660 [2024-04-27 00:57:17.300109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.660 [2024-04-27 00:57:17.308697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.660 [2024-04-27 00:57:17.309497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.660 [2024-04-27 00:57:17.309515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.660 [2024-04-27 00:57:17.318081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.660 [2024-04-27 00:57:17.318883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.660 [2024-04-27 00:57:17.318901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.660 [2024-04-27 00:57:17.327480] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.660 [2024-04-27 00:57:17.328272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.660 [2024-04-27 00:57:17.328290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.660 [2024-04-27 00:57:17.336923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.660 [2024-04-27 00:57:17.337727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.660 [2024-04-27 00:57:17.337745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.660 [2024-04-27 00:57:17.346340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.660 [2024-04-27 00:57:17.347147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.660 [2024-04-27 00:57:17.347166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.929 [2024-04-27 00:57:17.356194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.929 [2024-04-27 00:57:17.357029] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.929 [2024-04-27 00:57:17.357049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.929 [2024-04-27 00:57:17.365793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.929 [2024-04-27 00:57:17.366604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.929 [2024-04-27 00:57:17.366624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.929 [2024-04-27 00:57:17.375173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.929 [2024-04-27 00:57:17.376050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.929 [2024-04-27 00:57:17.376068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.929 [2024-04-27 00:57:17.384617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.929 [2024-04-27 00:57:17.385407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.929 [2024-04-27 00:57:17.385426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.929 [2024-04-27 00:57:17.394029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.929 [2024-04-27 00:57:17.394819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.929 [2024-04-27 00:57:17.394838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.929 [2024-04-27 00:57:17.403389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.929 [2024-04-27 00:57:17.404197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.404216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.412851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.930 [2024-04-27 00:57:17.413673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.413692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.422511] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.930 [2024-04-27 00:57:17.423340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.423362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.432016] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.930 [2024-04-27 00:57:17.432822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.432841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.441490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.930 [2024-04-27 00:57:17.442336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.442355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.450890] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.930 [2024-04-27 00:57:17.451680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.451699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.460343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.930 [2024-04-27 00:57:17.461150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.461169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.469746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.930 [2024-04-27 00:57:17.470548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.470567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.479136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.930 [2024-04-27 00:57:17.479940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.479958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.488642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.930 [2024-04-27 
00:57:17.489456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.489475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.498110] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.930 [2024-04-27 00:57:17.498902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.498921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.507491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.930 [2024-04-27 00:57:17.508289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.508311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.516941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.930 [2024-04-27 00:57:17.517756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.517775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.526335] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.930 [2024-04-27 00:57:17.527136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.527154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.535756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.930 [2024-04-27 00:57:17.536569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.536587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.545236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.930 [2024-04-27 00:57:17.546035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.546054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.554617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 
00:23:24.930 [2024-04-27 00:57:17.555428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.555447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.564103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.930 [2024-04-27 00:57:17.564923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.564942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.573491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.930 [2024-04-27 00:57:17.574289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.574307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.582883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.930 [2024-04-27 00:57:17.583682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.583702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.592430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.930 [2024-04-27 00:57:17.593235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.593254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.601808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:24.930 [2024-04-27 00:57:17.602626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.602644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.611236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:24.930 [2024-04-27 00:57:17.612045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.612063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.930 [2024-04-27 00:57:17.620906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) 
with pdu=0x2000190f31b8 00:23:24.930 [2024-04-27 00:57:17.621766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.930 [2024-04-27 00:57:17.621786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.630706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:25.190 [2024-04-27 00:57:17.631557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.631577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.640186] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:25.190 [2024-04-27 00:57:17.640983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.641002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.649573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:25.190 [2024-04-27 00:57:17.650370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.650388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.658942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:25.190 [2024-04-27 00:57:17.659746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.659765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.668392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:25.190 [2024-04-27 00:57:17.669195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.669213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.677916] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:25.190 [2024-04-27 00:57:17.678719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.678737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.687307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:25.190 [2024-04-27 00:57:17.688099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.688117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.696801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:25.190 [2024-04-27 00:57:17.697606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.697626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.706188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:25.190 [2024-04-27 00:57:17.706981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.706999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.715593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:25.190 [2024-04-27 00:57:17.716400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.716418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.724986] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:25.190 [2024-04-27 00:57:17.725787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.725805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.734354] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:25.190 [2024-04-27 00:57:17.735148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.735167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.743771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f7970 00:23:25.190 [2024-04-27 00:57:17.744557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.744575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.753159] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:25.190 [2024-04-27 00:57:17.753931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.190 [2024-04-27 00:57:17.753953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:25.190 [2024-04-27 00:57:17.762248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f2d80 00:23:25.190 [2024-04-27 00:57:17.764996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.191 [2024-04-27 00:57:17.765015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:25.191 [2024-04-27 00:57:17.776449] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f2d80 00:23:25.191 [2024-04-27 00:57:17.777652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.191 [2024-04-27 00:57:17.777671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.191 [2024-04-27 00:57:17.786636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.191 [2024-04-27 00:57:17.786899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.191 [2024-04-27 00:57:17.786918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.191 [2024-04-27 00:57:17.796510] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.191 [2024-04-27 00:57:17.796748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.191 [2024-04-27 00:57:17.796767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.191 [2024-04-27 00:57:17.806336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.191 [2024-04-27 00:57:17.806584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.191 [2024-04-27 00:57:17.806603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.191 [2024-04-27 00:57:17.816087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.191 [2024-04-27 00:57:17.816346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.191 [2024-04-27 00:57:17.816365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.191 [2024-04-27 00:57:17.825906] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.191 [2024-04-27 00:57:17.826160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.191 [2024-04-27 00:57:17.826180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.191 [2024-04-27 00:57:17.835705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.191 [2024-04-27 00:57:17.835937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.191 [2024-04-27 00:57:17.835956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.191 [2024-04-27 00:57:17.845485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.191 [2024-04-27 00:57:17.845732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.191 [2024-04-27 00:57:17.845751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.191 [2024-04-27 00:57:17.855351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.191 [2024-04-27 00:57:17.855609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.191 [2024-04-27 00:57:17.855627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.191 [2024-04-27 00:57:17.865066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.191 [2024-04-27 00:57:17.865666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.191 [2024-04-27 00:57:17.865685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.191 [2024-04-27 00:57:17.874927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.191 [2024-04-27 00:57:17.875312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.191 [2024-04-27 00:57:17.875330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.191 [2024-04-27 00:57:17.884981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.191 [2024-04-27 00:57:17.885235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.191 [2024-04-27 00:57:17.885254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:25.449 [2024-04-27 00:57:17.895208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.449 [2024-04-27 00:57:17.895457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:17.895475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:17.905063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.449 [2024-04-27 00:57:17.905316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:17.905334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:17.914887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.449 [2024-04-27 00:57:17.915123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:17.915142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:17.924643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.449 [2024-04-27 00:57:17.925122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:17.925140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:17.934601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.449 [2024-04-27 00:57:17.934882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:17.934900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:17.944276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fcdd0 00:23:25.449 [2024-04-27 00:57:17.946241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:17.946259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:17.957881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3a28 00:23:25.449 [2024-04-27 00:57:17.959171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:17.959189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:17.967785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.449 [2024-04-27 00:57:17.967991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:17.968011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:17.977558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.449 [2024-04-27 00:57:17.977841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:17.977860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:17.987380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.449 [2024-04-27 00:57:17.987603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:17.987621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:17.997142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.449 [2024-04-27 00:57:17.997348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:17.997367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:18.006922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.449 [2024-04-27 00:57:18.007109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:18.007127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:18.016725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.449 [2024-04-27 00:57:18.017119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:18.017141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:18.026600] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.449 [2024-04-27 00:57:18.026784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:18.026802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:18.036388] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.449 [2024-04-27 00:57:18.036731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:18.036750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:18.046219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.449 [2024-04-27 00:57:18.046403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.449 [2024-04-27 00:57:18.046420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.449 [2024-04-27 00:57:18.055968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.449 [2024-04-27 00:57:18.056148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.450 [2024-04-27 00:57:18.056166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.450 [2024-04-27 00:57:18.065707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.450 [2024-04-27 00:57:18.066545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.450 [2024-04-27 00:57:18.066564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.450 [2024-04-27 00:57:18.075481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.450 [2024-04-27 00:57:18.076229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.450 [2024-04-27 00:57:18.076248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.450 [2024-04-27 00:57:18.085271] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.450 [2024-04-27 00:57:18.085630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.450 [2024-04-27 00:57:18.085648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.450 [2024-04-27 00:57:18.095090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.450 [2024-04-27 00:57:18.095274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.450 [2024-04-27 00:57:18.095291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.450 [2024-04-27 00:57:18.104790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f3e60 00:23:25.450 [2024-04-27 00:57:18.105685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.450 [2024-04-27 00:57:18.105704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:25.450 [2024-04-27 00:57:18.115217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190fc560 00:23:25.450 [2024-04-27 00:57:18.116208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.450 [2024-04-27 00:57:18.116227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:25.450 [2024-04-27 00:57:18.125309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f35f0 00:23:25.450 [2024-04-27 00:57:18.125527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.450 [2024-04-27 00:57:18.125546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.450 [2024-04-27 00:57:18.135056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f35f0 00:23:25.450 [2024-04-27 00:57:18.135543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.450 [2024-04-27 00:57:18.135562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.708 [2024-04-27 00:57:18.145322] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f35f0 00:23:25.708 [2024-04-27 00:57:18.145705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.708 [2024-04-27 00:57:18.145724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.708 [2024-04-27 00:57:18.155258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f35f0 00:23:25.708 [2024-04-27 00:57:18.156881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.708 [2024-04-27 00:57:18.156899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.708 [2024-04-27 00:57:18.167238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f4298 00:23:25.708 [2024-04-27 00:57:18.168220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.168239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.177038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:25.709 [2024-04-27 00:57:18.177287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.177306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.186904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:25.709 [2024-04-27 00:57:18.187374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.187393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.196707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:25.709 [2024-04-27 00:57:18.197178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.197197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.206566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:25.709 [2024-04-27 00:57:18.206833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.206851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.216512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:25.709 [2024-04-27 00:57:18.217149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.217168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.226286] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f31b8 00:23:25.709 [2024-04-27 00:57:18.227232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.227251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.236976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f4f40 00:23:25.709 [2024-04-27 00:57:18.238215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 
00:57:18.238234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.246336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e2c28 00:23:25.709 [2024-04-27 00:57:18.247952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.247970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.255571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f6cc8 00:23:25.709 [2024-04-27 00:57:18.256455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.256474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.265379] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f6cc8 00:23:25.709 [2024-04-27 00:57:18.265588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.265607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.275094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f6cc8 00:23:25.709 [2024-04-27 00:57:18.275557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.275578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.284945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f6cc8 00:23:25.709 [2024-04-27 00:57:18.285341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.285359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.294661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f6cc8 00:23:25.709 [2024-04-27 00:57:18.295182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.295201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.304367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190f6cc8 00:23:25.709 [2024-04-27 00:57:18.305369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:25.709 [2024-04-27 00:57:18.305387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.313994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e7818 00:23:25.709 [2024-04-27 00:57:18.314375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.314395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.323855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e7818 00:23:25.709 [2024-04-27 00:57:18.324208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.324227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.333725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e7818 00:23:25.709 [2024-04-27 00:57:18.334432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.334451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.343421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e7818 00:23:25.709 [2024-04-27 00:57:18.344924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.344943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.354473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e6fa8 00:23:25.709 [2024-04-27 00:57:18.355348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.355367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.364463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e6b70 00:23:25.709 [2024-04-27 00:57:18.364685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.364703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.374182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e6b70 00:23:25.709 [2024-04-27 00:57:18.374667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24508 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.374686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.383950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e6b70 00:23:25.709 [2024-04-27 00:57:18.384710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.384728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.709 [2024-04-27 00:57:18.393813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e6b70 00:23:25.709 [2024-04-27 00:57:18.394082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.709 [2024-04-27 00:57:18.394100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.968 [2024-04-27 00:57:18.404052] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e6b70 00:23:25.969 [2024-04-27 00:57:18.404301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.404320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.414029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e6b70 00:23:25.969 [2024-04-27 00:57:18.414580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.414598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.423908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e6b70 00:23:25.969 [2024-04-27 00:57:18.424119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.424137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.434833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e88f8 00:23:25.969 [2024-04-27 00:57:18.435617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.435635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.444966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e49b0 00:23:25.969 [2024-04-27 00:57:18.445464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21857 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.445482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.454727] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e49b0 00:23:25.969 [2024-04-27 00:57:18.455084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.455103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.464465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e49b0 00:23:25.969 [2024-04-27 00:57:18.466049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.466067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.474497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e4140 00:23:25.969 [2024-04-27 00:57:18.474733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.474752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.484236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e4140 00:23:25.969 [2024-04-27 00:57:18.484914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.484933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.494010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e4140 00:23:25.969 [2024-04-27 00:57:18.494996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.495015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.503847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e4140 00:23:25.969 [2024-04-27 00:57:18.504087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.504105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.513621] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e4140 00:23:25.969 [2024-04-27 00:57:18.514170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:101 nsid:1 lba:20817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.514189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.523434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e4140 00:23:25.969 [2024-04-27 00:57:18.523825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.523843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.533249] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e4140 00:23:25.969 [2024-04-27 00:57:18.533484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.533507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.543009] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e4140 00:23:25.969 [2024-04-27 00:57:18.543254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.543273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.552894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e4140 00:23:25.969 [2024-04-27 00:57:18.553089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.553106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.562645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e4140 00:23:25.969 [2024-04-27 00:57:18.562834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.562851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.572426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e4140 00:23:25.969 [2024-04-27 00:57:18.572617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.572634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.582228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e4140 00:23:25.969 [2024-04-27 00:57:18.582419] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.582437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.591881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e4140 00:23:25.969 [2024-04-27 00:57:18.593903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.593921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.605699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e49b0 00:23:25.969 [2024-04-27 00:57:18.607013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.607031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:25.969 [2024-04-27 00:57:18.615807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124af40) with pdu=0x2000190e12d8 00:23:25.969 [2024-04-27 00:57:18.617189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:25.969 [2024-04-27 00:57:18.617207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:25.969 00:23:25.969 Latency(us) 00:23:25.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:25.969 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:25.969 nvme0n1 : 2.00 26074.44 101.85 0.00 0.00 4901.07 2393.49 25302.59 00:23:25.969 =================================================================================================================== 00:23:25.969 Total : 26074.44 101.85 0.00 0.00 4901.07 2393.49 25302.59 00:23:25.969 0 00:23:25.969 00:57:18 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:25.969 00:57:18 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:25.969 00:57:18 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:25.969 | .driver_specific 00:23:25.969 | .nvme_error 00:23:25.969 | .status_code 00:23:25.969 | .command_transient_transport_error' 00:23:25.969 00:57:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:26.228 00:57:18 -- host/digest.sh@71 -- # (( 204 > 0 )) 00:23:26.228 00:57:18 -- host/digest.sh@73 -- # killprocess 1804369 00:23:26.228 00:57:18 -- common/autotest_common.sh@936 -- # '[' -z 1804369 ']' 00:23:26.228 00:57:18 -- common/autotest_common.sh@940 -- # kill -0 1804369 00:23:26.228 00:57:18 -- common/autotest_common.sh@941 -- # uname 00:23:26.228 00:57:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:26.228 00:57:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1804369 00:23:26.228 00:57:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:26.228 00:57:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:26.228 
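[editor's note] The shell trace above is where this run turns into a verdict: the harness pulls bdevperf's per-bdev statistics over the bperf RPC socket and extracts the count of completions that came back as COMMAND TRANSIENT TRANSPORT ERROR, then requires it to be non-zero ((( 204 > 0 )) in this run), i.e. the injected data digest errors really did surface on the host side. A minimal stand-alone sketch of that same query, with the multi-line jq filter collapsed onto one line and the rpc.py path and socket taken from this job; it assumes the controller was set up with --nvme-error-stat so the nvme_error counters are populated, as in the setup traced further down:

  # sketch only: count NVMe completions reported as transient transport errors (00/22)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$($rpc -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) || { echo "no data digest errors were counted" >&2; exit 1; }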
00:57:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1804369' 00:23:26.228 killing process with pid 1804369 00:23:26.228 00:57:18 -- common/autotest_common.sh@955 -- # kill 1804369 00:23:26.228 Received shutdown signal, test time was about 2.000000 seconds 00:23:26.228 00:23:26.228 Latency(us) 00:23:26.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.228 =================================================================================================================== 00:23:26.228 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.228 00:57:18 -- common/autotest_common.sh@960 -- # wait 1804369 00:23:26.486 00:57:19 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:23:26.487 00:57:19 -- host/digest.sh@54 -- # local rw bs qd 00:23:26.487 00:57:19 -- host/digest.sh@56 -- # rw=randwrite 00:23:26.487 00:57:19 -- host/digest.sh@56 -- # bs=131072 00:23:26.487 00:57:19 -- host/digest.sh@56 -- # qd=16 00:23:26.487 00:57:19 -- host/digest.sh@58 -- # bperfpid=1805066 00:23:26.487 00:57:19 -- host/digest.sh@60 -- # waitforlisten 1805066 /var/tmp/bperf.sock 00:23:26.487 00:57:19 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:26.487 00:57:19 -- common/autotest_common.sh@817 -- # '[' -z 1805066 ']' 00:23:26.487 00:57:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:26.487 00:57:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:26.487 00:57:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:26.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:26.487 00:57:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:26.487 00:57:19 -- common/autotest_common.sh@10 -- # set +x 00:23:26.487 [2024-04-27 00:57:19.128951] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:23:26.487 [2024-04-27 00:57:19.129001] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1805066 ] 00:23:26.487 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:26.487 Zero copy mechanism will not be used. 
00:23:26.487 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.745 [2024-04-27 00:57:19.183206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.745 [2024-04-27 00:57:19.260788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.310 00:57:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:27.310 00:57:19 -- common/autotest_common.sh@850 -- # return 0 00:23:27.310 00:57:19 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:27.310 00:57:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:27.568 00:57:20 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:27.568 00:57:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.568 00:57:20 -- common/autotest_common.sh@10 -- # set +x 00:23:27.568 00:57:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.568 00:57:20 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:27.568 00:57:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:27.826 nvme0n1 00:23:27.826 00:57:20 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:27.826 00:57:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.826 00:57:20 -- common/autotest_common.sh@10 -- # set +x 00:23:27.826 00:57:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.826 00:57:20 -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:27.826 00:57:20 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:27.826 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:27.826 Zero copy mechanism will not be used. 00:23:27.826 Running I/O for 2 seconds... 
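[editor's note] Before the second run's output starts, the trace above captures the whole error-injection setup in a handful of RPCs: bdevperf is started against its own socket in wait mode, per-NVMe error counters are enabled, crc32c corruption is switched off while the controller attaches with data digest enabled (--ddgst), corruption is then armed, and perform_tests kicks the workload off. A condensed sketch of that sequence, assuming it runs from the SPDK tree checked out by this job and that the rpc.py calls without -s land on the nvmf target application's default RPC socket (which is what rpc_cmd resolves to in this suite):

  # sketch of the traced sequence, not a drop-in script
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  # ... wait for /var/tmp/bperf.sock to appear (waitforlisten in the trace) ...
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable        # target side: no corruption while attaching
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32  # target side: corrupt crc32c results (arguments as traced)
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests  # -z above keeps bdevperf idle until this call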
00:23:27.826 [2024-04-27 00:57:20.514434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:27.826 [2024-04-27 00:57:20.515058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:27.826 [2024-04-27 00:57:20.515092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.529757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.530208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.530231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.544522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.544970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.544992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.559236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.559506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.559526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.575786] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.576243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.576264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.591416] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.591946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.591966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.608823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.609279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.609300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.625816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.626343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.626362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.642919] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.643513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.643534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.668951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.669700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.669720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.687705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.688174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.688193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.706802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.707380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.707399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.724850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.725373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.725392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.742509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.743257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.743280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.085 [2024-04-27 00:57:20.762412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.085 [2024-04-27 00:57:20.762999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.085 [2024-04-27 00:57:20.763019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.344 [2024-04-27 00:57:20.781184] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.344 [2024-04-27 00:57:20.781765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.344 [2024-04-27 00:57:20.781785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.344 [2024-04-27 00:57:20.799154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.344 [2024-04-27 00:57:20.799942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.344 [2024-04-27 00:57:20.799961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.344 [2024-04-27 00:57:20.816361] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.344 [2024-04-27 00:57:20.817046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.344 [2024-04-27 00:57:20.817067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.344 [2024-04-27 00:57:20.835533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.344 [2024-04-27 00:57:20.836076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.344 [2024-04-27 00:57:20.836096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.344 [2024-04-27 00:57:20.852434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.344 [2024-04-27 00:57:20.852904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.344 [2024-04-27 00:57:20.852924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.344 [2024-04-27 00:57:20.871065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.344 [2024-04-27 00:57:20.871802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.344 [2024-04-27 00:57:20.871821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.344 [2024-04-27 00:57:20.890084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.344 [2024-04-27 00:57:20.890731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.344 [2024-04-27 00:57:20.890750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.344 [2024-04-27 00:57:20.907872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.345 [2024-04-27 00:57:20.908558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.345 [2024-04-27 00:57:20.908577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.345 [2024-04-27 00:57:20.925008] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.345 [2024-04-27 00:57:20.925659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.345 [2024-04-27 00:57:20.925680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.345 [2024-04-27 00:57:20.942138] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.345 [2024-04-27 00:57:20.942614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.345 [2024-04-27 00:57:20.942634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.345 [2024-04-27 00:57:20.959241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.345 [2024-04-27 00:57:20.959655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.345 [2024-04-27 00:57:20.959674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.345 [2024-04-27 00:57:20.976515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.345 [2024-04-27 00:57:20.976991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.345 [2024-04-27 00:57:20.977010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.345 [2024-04-27 00:57:20.993234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.345 [2024-04-27 00:57:20.993756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.345 
[2024-04-27 00:57:20.993774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.345 [2024-04-27 00:57:21.009540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.345 [2024-04-27 00:57:21.009987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.345 [2024-04-27 00:57:21.010006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.345 [2024-04-27 00:57:21.027619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.345 [2024-04-27 00:57:21.028214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.345 [2024-04-27 00:57:21.028235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.603 [2024-04-27 00:57:21.045007] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.603 [2024-04-27 00:57:21.045542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.603 [2024-04-27 00:57:21.045566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.603 [2024-04-27 00:57:21.063188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.603 [2024-04-27 00:57:21.063873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.603 [2024-04-27 00:57:21.063892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.603 [2024-04-27 00:57:21.081314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.603 [2024-04-27 00:57:21.081861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.603 [2024-04-27 00:57:21.081881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.603 [2024-04-27 00:57:21.098694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.603 [2024-04-27 00:57:21.099234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.603 [2024-04-27 00:57:21.099255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.603 [2024-04-27 00:57:21.117272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.604 [2024-04-27 00:57:21.117842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.604 [2024-04-27 00:57:21.117862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.604 [2024-04-27 00:57:21.134997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.604 [2024-04-27 00:57:21.135624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.604 [2024-04-27 00:57:21.135644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.604 [2024-04-27 00:57:21.153823] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.604 [2024-04-27 00:57:21.154301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.604 [2024-04-27 00:57:21.154321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.604 [2024-04-27 00:57:21.171822] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.604 [2024-04-27 00:57:21.172274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.604 [2024-04-27 00:57:21.172293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.604 [2024-04-27 00:57:21.190464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.604 [2024-04-27 00:57:21.190929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.604 [2024-04-27 00:57:21.190948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.604 [2024-04-27 00:57:21.208879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.604 [2024-04-27 00:57:21.209330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.604 [2024-04-27 00:57:21.209349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.604 [2024-04-27 00:57:21.226850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.604 [2024-04-27 00:57:21.227407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.604 [2024-04-27 00:57:21.227426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.604 [2024-04-27 00:57:21.245518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.604 [2024-04-27 00:57:21.246040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.604 [2024-04-27 00:57:21.246059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.604 [2024-04-27 00:57:21.263466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.604 [2024-04-27 00:57:21.264133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.604 [2024-04-27 00:57:21.264152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.604 [2024-04-27 00:57:21.281969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.604 [2024-04-27 00:57:21.282702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.604 [2024-04-27 00:57:21.282721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.604 [2024-04-27 00:57:21.298442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.604 [2024-04-27 00:57:21.298951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.604 [2024-04-27 00:57:21.298971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.315660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 [2024-04-27 00:57:21.316134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.316153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.333531] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 [2024-04-27 00:57:21.334058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.334082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.352236] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 [2024-04-27 00:57:21.352699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.352718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.370904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 [2024-04-27 00:57:21.371490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.371509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.389663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 [2024-04-27 00:57:21.390343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.390362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.407705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 [2024-04-27 00:57:21.408185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.408204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.424669] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 [2024-04-27 00:57:21.425266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.425286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.443881] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 [2024-04-27 00:57:21.444527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.444547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.462201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 [2024-04-27 00:57:21.462849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.462868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.479722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 [2024-04-27 00:57:21.480270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.480290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.497639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 
[2024-04-27 00:57:21.498259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.498278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.515701] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 [2024-04-27 00:57:21.516376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.516400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.535516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 [2024-04-27 00:57:21.535998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.536017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:28.863 [2024-04-27 00:57:21.553454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:28.863 [2024-04-27 00:57:21.554021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.863 [2024-04-27 00:57:21.554042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.122 [2024-04-27 00:57:21.572891] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.122 [2024-04-27 00:57:21.573237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.122 [2024-04-27 00:57:21.573256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.122 [2024-04-27 00:57:21.591664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.122 [2024-04-27 00:57:21.592265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.122 [2024-04-27 00:57:21.592285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.122 [2024-04-27 00:57:21.609392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.122 [2024-04-27 00:57:21.610122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.122 [2024-04-27 00:57:21.610142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.122 [2024-04-27 00:57:21.629502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.122 [2024-04-27 00:57:21.630159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.122 [2024-04-27 00:57:21.630179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.122 [2024-04-27 00:57:21.648603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.122 [2024-04-27 00:57:21.649217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.122 [2024-04-27 00:57:21.649238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.122 [2024-04-27 00:57:21.665126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.122 [2024-04-27 00:57:21.665459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.122 [2024-04-27 00:57:21.665478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.122 [2024-04-27 00:57:21.683681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.122 [2024-04-27 00:57:21.684332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.122 [2024-04-27 00:57:21.684351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.122 [2024-04-27 00:57:21.703724] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.122 [2024-04-27 00:57:21.704332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.122 [2024-04-27 00:57:21.704351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.122 [2024-04-27 00:57:21.721810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.122 [2024-04-27 00:57:21.722379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.122 [2024-04-27 00:57:21.722399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.122 [2024-04-27 00:57:21.740505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.122 [2024-04-27 00:57:21.740843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.122 [2024-04-27 00:57:21.740862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.122 [2024-04-27 00:57:21.758232] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.122 [2024-04-27 00:57:21.758900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.122 [2024-04-27 00:57:21.758920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.122 [2024-04-27 00:57:21.777150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.122 [2024-04-27 00:57:21.777933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.122 [2024-04-27 00:57:21.777953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.122 [2024-04-27 00:57:21.796296] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.123 [2024-04-27 00:57:21.797060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.123 [2024-04-27 00:57:21.797086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.123 [2024-04-27 00:57:21.815301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.123 [2024-04-27 00:57:21.815771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.123 [2024-04-27 00:57:21.815791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.381 [2024-04-27 00:57:21.834031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.381 [2024-04-27 00:57:21.834370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.381 [2024-04-27 00:57:21.834390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.381 [2024-04-27 00:57:21.854488] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.381 [2024-04-27 00:57:21.855057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.381 [2024-04-27 00:57:21.855081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.381 [2024-04-27 00:57:21.873805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.381 [2024-04-27 00:57:21.874290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.382 [2024-04-27 00:57:21.874310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:29.382 [2024-04-27 00:57:21.891998] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.382 [2024-04-27 00:57:21.892542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.382 [2024-04-27 00:57:21.892561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.382 [2024-04-27 00:57:21.911874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.382 [2024-04-27 00:57:21.912417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.382 [2024-04-27 00:57:21.912435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.382 [2024-04-27 00:57:21.930980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.382 [2024-04-27 00:57:21.931846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.382 [2024-04-27 00:57:21.931865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.382 [2024-04-27 00:57:21.951127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.382 [2024-04-27 00:57:21.951761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.382 [2024-04-27 00:57:21.951780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.382 [2024-04-27 00:57:21.970874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.382 [2024-04-27 00:57:21.971485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.382 [2024-04-27 00:57:21.971507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.382 [2024-04-27 00:57:21.991438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.382 [2024-04-27 00:57:21.992121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.382 [2024-04-27 00:57:21.992141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.382 [2024-04-27 00:57:22.010645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.382 [2024-04-27 00:57:22.011201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.382 [2024-04-27 00:57:22.011224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.382 [2024-04-27 00:57:22.029991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.382 [2024-04-27 00:57:22.030603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.382 [2024-04-27 00:57:22.030622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.382 [2024-04-27 00:57:22.049032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.382 [2024-04-27 00:57:22.049507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.382 [2024-04-27 00:57:22.049527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.382 [2024-04-27 00:57:22.076229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.382 [2024-04-27 00:57:22.077068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.382 [2024-04-27 00:57:22.077093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.641 [2024-04-27 00:57:22.104674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.641 [2024-04-27 00:57:22.105501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.641 [2024-04-27 00:57:22.105520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.641 [2024-04-27 00:57:22.124803] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.641 [2024-04-27 00:57:22.125076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.641 [2024-04-27 00:57:22.125095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.641 [2024-04-27 00:57:22.143478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.641 [2024-04-27 00:57:22.144054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.641 [2024-04-27 00:57:22.144078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.641 [2024-04-27 00:57:22.161161] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.641 [2024-04-27 00:57:22.161763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.641 [2024-04-27 00:57:22.161781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.641 [2024-04-27 00:57:22.178934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.641 [2024-04-27 00:57:22.179477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.641 [2024-04-27 00:57:22.179497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.642 [2024-04-27 00:57:22.199738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.642 [2024-04-27 00:57:22.200345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.642 [2024-04-27 00:57:22.200364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.642 [2024-04-27 00:57:22.217926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.642 [2024-04-27 00:57:22.218262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.642 [2024-04-27 00:57:22.218281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.642 [2024-04-27 00:57:22.236024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.642 [2024-04-27 00:57:22.236702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.642 [2024-04-27 00:57:22.236722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.642 [2024-04-27 00:57:22.253461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.642 [2024-04-27 00:57:22.254026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.642 [2024-04-27 00:57:22.254045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.642 [2024-04-27 00:57:22.270911] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.642 [2024-04-27 00:57:22.271580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.642 [2024-04-27 00:57:22.271601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.642 [2024-04-27 00:57:22.296591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.642 [2024-04-27 00:57:22.297324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.642 [2024-04-27 00:57:22.297343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.642 [2024-04-27 00:57:22.314946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.642 [2024-04-27 00:57:22.315471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.642 [2024-04-27 00:57:22.315489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.642 [2024-04-27 00:57:22.333815] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.642 [2024-04-27 00:57:22.334312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.642 [2024-04-27 00:57:22.334332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.901 [2024-04-27 00:57:22.357092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.901 [2024-04-27 00:57:22.357604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.901 [2024-04-27 00:57:22.357624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.901 [2024-04-27 00:57:22.377452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.901 [2024-04-27 00:57:22.378264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.901 [2024-04-27 00:57:22.378284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.901 [2024-04-27 00:57:22.397969] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.901 [2024-04-27 00:57:22.398805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.901 [2024-04-27 00:57:22.398824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.901 [2024-04-27 00:57:22.416407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.901 [2024-04-27 00:57:22.416880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.901 [2024-04-27 00:57:22.416898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:29.901 [2024-04-27 00:57:22.434714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.901 [2024-04-27 00:57:22.435062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.901 
[2024-04-27 00:57:22.435085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:29.901 [2024-04-27 00:57:22.452525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.901 [2024-04-27 00:57:22.453193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.901 [2024-04-27 00:57:22.453213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.901 [2024-04-27 00:57:22.479806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x124b280) with pdu=0x2000190fef90 00:23:29.901 [2024-04-27 00:57:22.480480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.901 [2024-04-27 00:57:22.480499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:29.901 00:23:29.901 Latency(us) 00:23:29.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.901 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:29.901 nvme0n1 : 2.01 1642.40 205.30 0.00 0.00 9717.64 6696.07 30317.52 00:23:29.901 =================================================================================================================== 00:23:29.901 Total : 1642.40 205.30 0.00 0.00 9717.64 6696.07 30317.52 00:23:29.901 0 00:23:29.901 00:57:22 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:29.901 00:57:22 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:29.901 00:57:22 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:29.901 | .driver_specific 00:23:29.901 | .nvme_error 00:23:29.901 | .status_code 00:23:29.901 | .command_transient_transport_error' 00:23:29.901 00:57:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:30.160 00:57:22 -- host/digest.sh@71 -- # (( 106 > 0 )) 00:23:30.160 00:57:22 -- host/digest.sh@73 -- # killprocess 1805066 00:23:30.160 00:57:22 -- common/autotest_common.sh@936 -- # '[' -z 1805066 ']' 00:23:30.160 00:57:22 -- common/autotest_common.sh@940 -- # kill -0 1805066 00:23:30.160 00:57:22 -- common/autotest_common.sh@941 -- # uname 00:23:30.160 00:57:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:30.160 00:57:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1805066 00:23:30.160 00:57:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:30.160 00:57:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:30.160 00:57:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1805066' 00:23:30.160 killing process with pid 1805066 00:23:30.160 00:57:22 -- common/autotest_common.sh@955 -- # kill 1805066 00:23:30.160 Received shutdown signal, test time was about 2.000000 seconds 00:23:30.160 00:23:30.160 Latency(us) 00:23:30.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.160 =================================================================================================================== 00:23:30.160 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:30.160 00:57:22 -- 
common/autotest_common.sh@960 -- # wait 1805066 00:23:30.421 00:57:22 -- host/digest.sh@116 -- # killprocess 1802869 00:23:30.421 00:57:22 -- common/autotest_common.sh@936 -- # '[' -z 1802869 ']' 00:23:30.421 00:57:22 -- common/autotest_common.sh@940 -- # kill -0 1802869 00:23:30.421 00:57:22 -- common/autotest_common.sh@941 -- # uname 00:23:30.421 00:57:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:30.421 00:57:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1802869 00:23:30.421 00:57:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:30.422 00:57:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:30.422 00:57:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1802869' 00:23:30.422 killing process with pid 1802869 00:23:30.422 00:57:22 -- common/autotest_common.sh@955 -- # kill 1802869 00:23:30.422 00:57:22 -- common/autotest_common.sh@960 -- # wait 1802869 00:23:30.681 00:23:30.681 real 0m16.845s 00:23:30.681 user 0m33.381s 00:23:30.681 sys 0m3.425s 00:23:30.681 00:57:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:30.681 00:57:23 -- common/autotest_common.sh@10 -- # set +x 00:23:30.681 ************************************ 00:23:30.681 END TEST nvmf_digest_error 00:23:30.681 ************************************ 00:23:30.681 00:57:23 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:23:30.681 00:57:23 -- host/digest.sh@150 -- # nvmftestfini 00:23:30.681 00:57:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:30.681 00:57:23 -- nvmf/common.sh@117 -- # sync 00:23:30.681 00:57:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:30.681 00:57:23 -- nvmf/common.sh@120 -- # set +e 00:23:30.681 00:57:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:30.681 00:57:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:30.681 rmmod nvme_tcp 00:23:30.681 rmmod nvme_fabrics 00:23:30.681 rmmod nvme_keyring 00:23:30.681 00:57:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:30.681 00:57:23 -- nvmf/common.sh@124 -- # set -e 00:23:30.681 00:57:23 -- nvmf/common.sh@125 -- # return 0 00:23:30.681 00:57:23 -- nvmf/common.sh@478 -- # '[' -n 1802869 ']' 00:23:30.681 00:57:23 -- nvmf/common.sh@479 -- # killprocess 1802869 00:23:30.681 00:57:23 -- common/autotest_common.sh@936 -- # '[' -z 1802869 ']' 00:23:30.681 00:57:23 -- common/autotest_common.sh@940 -- # kill -0 1802869 00:23:30.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1802869) - No such process 00:23:30.681 00:57:23 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1802869 is not found' 00:23:30.681 Process with pid 1802869 is not found 00:23:30.681 00:57:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:30.681 00:57:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:30.682 00:57:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:30.682 00:57:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.682 00:57:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:30.682 00:57:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.682 00:57:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.682 00:57:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.216 00:57:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:33.216 00:23:33.216 real 0m41.896s 00:23:33.216 user 1m8.930s 00:23:33.216 sys 0m11.083s 00:23:33.216 00:57:25 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:23:33.216 00:57:25 -- common/autotest_common.sh@10 -- # set +x 00:23:33.216 ************************************ 00:23:33.216 END TEST nvmf_digest 00:23:33.216 ************************************ 00:23:33.216 00:57:25 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:23:33.216 00:57:25 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:23:33.216 00:57:25 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:23:33.216 00:57:25 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:23:33.216 00:57:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:33.216 00:57:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:33.216 00:57:25 -- common/autotest_common.sh@10 -- # set +x 00:23:33.216 ************************************ 00:23:33.216 START TEST nvmf_bdevperf 00:23:33.216 ************************************ 00:23:33.216 00:57:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:23:33.216 * Looking for test storage... 00:23:33.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:33.216 00:57:25 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:33.216 00:57:25 -- nvmf/common.sh@7 -- # uname -s 00:23:33.216 00:57:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.216 00:57:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.216 00:57:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.217 00:57:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.217 00:57:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.217 00:57:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.217 00:57:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.217 00:57:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.217 00:57:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.217 00:57:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.217 00:57:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:33.217 00:57:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:33.217 00:57:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.217 00:57:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.217 00:57:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:33.217 00:57:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.217 00:57:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:33.217 00:57:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.217 00:57:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.217 00:57:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.217 00:57:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.217 00:57:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.217 00:57:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.217 00:57:25 -- paths/export.sh@5 -- # export PATH 00:23:33.217 00:57:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.217 00:57:25 -- nvmf/common.sh@47 -- # : 0 00:23:33.217 00:57:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.217 00:57:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.217 00:57:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.217 00:57:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.217 00:57:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.217 00:57:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.217 00:57:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.217 00:57:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.217 00:57:25 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:33.217 00:57:25 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:33.217 00:57:25 -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:33.217 00:57:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:33.217 00:57:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.217 00:57:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:33.217 00:57:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:33.217 00:57:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:33.217 00:57:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:33.217 00:57:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.217 00:57:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.217 00:57:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:33.217 00:57:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:33.217 00:57:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:33.217 00:57:25 -- common/autotest_common.sh@10 -- # set +x 00:23:38.490 00:57:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:38.490 00:57:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:38.490 00:57:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:38.490 00:57:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:38.490 00:57:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:38.490 00:57:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:38.490 00:57:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:38.490 00:57:30 -- nvmf/common.sh@295 -- # net_devs=() 00:23:38.490 00:57:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:38.490 00:57:30 -- nvmf/common.sh@296 -- # e810=() 00:23:38.490 00:57:30 -- nvmf/common.sh@296 -- # local -ga e810 00:23:38.490 00:57:30 -- nvmf/common.sh@297 -- # x722=() 00:23:38.490 00:57:30 -- nvmf/common.sh@297 -- # local -ga x722 00:23:38.490 00:57:30 -- nvmf/common.sh@298 -- # mlx=() 00:23:38.490 00:57:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:38.490 00:57:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.490 00:57:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.490 00:57:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.490 00:57:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.490 00:57:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.490 00:57:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.490 00:57:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.490 00:57:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.490 00:57:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.490 00:57:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.490 00:57:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.490 00:57:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:38.490 00:57:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:38.490 00:57:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:38.490 00:57:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.490 00:57:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:38.490 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:38.490 00:57:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.490 00:57:30 -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:38.490 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:38.490 00:57:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:38.490 00:57:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.490 00:57:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.490 00:57:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:38.490 00:57:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.490 00:57:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:38.490 Found net devices under 0000:86:00.0: cvl_0_0 00:23:38.490 00:57:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.490 00:57:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.490 00:57:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.490 00:57:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:38.490 00:57:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.490 00:57:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:38.490 Found net devices under 0000:86:00.1: cvl_0_1 00:23:38.490 00:57:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.490 00:57:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:38.490 00:57:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:38.490 00:57:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:38.490 00:57:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.490 00:57:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.490 00:57:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.490 00:57:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:38.490 00:57:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.490 00:57:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.490 00:57:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:38.490 00:57:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.490 00:57:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.490 00:57:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:38.490 00:57:30 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:38.490 00:57:30 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.490 00:57:30 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.490 00:57:30 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.490 00:57:30 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.490 00:57:30 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:38.490 00:57:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
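When a target in this topology stops answering on 10.0.0.2, the namespace plumbing traced above is easy to reproduce by hand. Below is a condensed sketch of the same nvmf_tcp_init steps, assuming the cvl_0_0/cvl_0_1 port names and 10.0.0.x addressing used in this run; the loopback, iptables and ping verification commands appear in the trace just below, and the TGT_IF/INI_IF/NS variable names are only for readability.

#!/usr/bin/env bash
# Condensed from the nvmf/common.sh trace above: move the target-side port into
# its own network namespace and address both ends of the link (run as root).
set -ex
TGT_IF=cvl_0_0              # target-side port, moves into the namespace
INI_IF=cvl_0_1              # initiator-side port, stays in the root namespace
NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # initiator reaches the target address
ip netns exec "$NS" ping -c 1 10.0.0.1    # and the target reaches the initiator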
00:23:38.490 00:57:30 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.490 00:57:30 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.490 00:57:30 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:38.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:23:38.490 00:23:38.490 --- 10.0.0.2 ping statistics --- 00:23:38.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.490 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:23:38.490 00:57:30 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:38.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:23:38.490 00:23:38.490 --- 10.0.0.1 ping statistics --- 00:23:38.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.490 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:23:38.490 00:57:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.490 00:57:30 -- nvmf/common.sh@411 -- # return 0 00:23:38.490 00:57:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:38.490 00:57:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.490 00:57:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:38.490 00:57:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.490 00:57:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:38.490 00:57:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:38.490 00:57:30 -- host/bdevperf.sh@25 -- # tgt_init 00:23:38.490 00:57:30 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:38.490 00:57:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:38.490 00:57:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:38.490 00:57:30 -- common/autotest_common.sh@10 -- # set +x 00:23:38.490 00:57:30 -- nvmf/common.sh@470 -- # nvmfpid=1809068 00:23:38.490 00:57:30 -- nvmf/common.sh@471 -- # waitforlisten 1809068 00:23:38.490 00:57:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:38.490 00:57:30 -- common/autotest_common.sh@817 -- # '[' -z 1809068 ']' 00:23:38.490 00:57:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.490 00:57:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:38.490 00:57:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.490 00:57:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:38.490 00:57:30 -- common/autotest_common.sh@10 -- # set +x 00:23:38.490 [2024-04-27 00:57:30.947869] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:23:38.490 [2024-04-27 00:57:30.947912] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.490 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.490 [2024-04-27 00:57:31.004195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:38.490 [2024-04-27 00:57:31.081691] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.491 [2024-04-27 00:57:31.081725] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.491 [2024-04-27 00:57:31.081732] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.491 [2024-04-27 00:57:31.081738] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.491 [2024-04-27 00:57:31.081744] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.491 [2024-04-27 00:57:31.081841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.491 [2024-04-27 00:57:31.081947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.491 [2024-04-27 00:57:31.081948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.058 00:57:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:39.058 00:57:31 -- common/autotest_common.sh@850 -- # return 0 00:23:39.317 00:57:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:39.317 00:57:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:39.317 00:57:31 -- common/autotest_common.sh@10 -- # set +x 00:23:39.317 00:57:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.317 00:57:31 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.317 00:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.317 00:57:31 -- common/autotest_common.sh@10 -- # set +x 00:23:39.317 [2024-04-27 00:57:31.793700] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.317 00:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.317 00:57:31 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:39.317 00:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.318 00:57:31 -- common/autotest_common.sh@10 -- # set +x 00:23:39.318 Malloc0 00:23:39.318 00:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.318 00:57:31 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.318 00:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.318 00:57:31 -- common/autotest_common.sh@10 -- # set +x 00:23:39.318 00:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.318 00:57:31 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.318 00:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.318 00:57:31 -- common/autotest_common.sh@10 -- # set +x 00:23:39.318 00:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.318 00:57:31 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.318 00:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.318 
00:57:31 -- common/autotest_common.sh@10 -- # set +x 00:23:39.318 [2024-04-27 00:57:31.854838] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.318 00:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.318 00:57:31 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:39.318 00:57:31 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:39.318 00:57:31 -- nvmf/common.sh@521 -- # config=() 00:23:39.318 00:57:31 -- nvmf/common.sh@521 -- # local subsystem config 00:23:39.318 00:57:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:39.318 00:57:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:39.318 { 00:23:39.318 "params": { 00:23:39.318 "name": "Nvme$subsystem", 00:23:39.318 "trtype": "$TEST_TRANSPORT", 00:23:39.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.318 "adrfam": "ipv4", 00:23:39.318 "trsvcid": "$NVMF_PORT", 00:23:39.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.318 "hdgst": ${hdgst:-false}, 00:23:39.318 "ddgst": ${ddgst:-false} 00:23:39.318 }, 00:23:39.318 "method": "bdev_nvme_attach_controller" 00:23:39.318 } 00:23:39.318 EOF 00:23:39.318 )") 00:23:39.318 00:57:31 -- nvmf/common.sh@543 -- # cat 00:23:39.318 00:57:31 -- nvmf/common.sh@545 -- # jq . 00:23:39.318 00:57:31 -- nvmf/common.sh@546 -- # IFS=, 00:23:39.318 00:57:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:39.318 "params": { 00:23:39.318 "name": "Nvme1", 00:23:39.318 "trtype": "tcp", 00:23:39.318 "traddr": "10.0.0.2", 00:23:39.318 "adrfam": "ipv4", 00:23:39.318 "trsvcid": "4420", 00:23:39.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.318 "hdgst": false, 00:23:39.318 "ddgst": false 00:23:39.318 }, 00:23:39.318 "method": "bdev_nvme_attach_controller" 00:23:39.318 }' 00:23:39.318 [2024-04-27 00:57:31.915126] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:23:39.318 [2024-04-27 00:57:31.915180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1809315 ] 00:23:39.318 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.318 [2024-04-27 00:57:31.969776] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.577 [2024-04-27 00:57:32.043288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.836 Running I/O for 1 seconds... 
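The --json /dev/fd/62 argument above hands bdevperf the controller definition that gen_nvmf_target_json prints; the same 1-second run can be reproduced outside the harness with an ordinary config file. A minimal sketch follows, assuming the standard SPDK "subsystems"/"bdev" wrapper around the controller entry shown above; the /tmp/bdevperf.json path is arbitrary, while the bdevperf binary path and workload flags are taken from this job.

# Write the controller definition from the log into a standalone config file.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload as the 1-second run above: queue depth 128, 4 KiB I/O, verify.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1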
00:23:40.776 00:23:40.776 Latency(us) 00:23:40.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.776 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:40.776 Verification LBA range: start 0x0 length 0x4000 00:23:40.776 Nvme1n1 : 1.01 10575.92 41.31 0.00 0.00 12050.53 2322.25 28721.86 00:23:40.776 =================================================================================================================== 00:23:40.776 Total : 10575.92 41.31 0.00 0.00 12050.53 2322.25 28721.86 00:23:41.034 00:57:33 -- host/bdevperf.sh@30 -- # bdevperfpid=1809555 00:23:41.034 00:57:33 -- host/bdevperf.sh@32 -- # sleep 3 00:23:41.034 00:57:33 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:23:41.034 00:57:33 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:23:41.034 00:57:33 -- nvmf/common.sh@521 -- # config=() 00:23:41.034 00:57:33 -- nvmf/common.sh@521 -- # local subsystem config 00:23:41.034 00:57:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:41.034 00:57:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:41.034 { 00:23:41.034 "params": { 00:23:41.034 "name": "Nvme$subsystem", 00:23:41.034 "trtype": "$TEST_TRANSPORT", 00:23:41.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.034 "adrfam": "ipv4", 00:23:41.034 "trsvcid": "$NVMF_PORT", 00:23:41.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.034 "hdgst": ${hdgst:-false}, 00:23:41.034 "ddgst": ${ddgst:-false} 00:23:41.034 }, 00:23:41.034 "method": "bdev_nvme_attach_controller" 00:23:41.034 } 00:23:41.034 EOF 00:23:41.034 )") 00:23:41.034 00:57:33 -- nvmf/common.sh@543 -- # cat 00:23:41.034 00:57:33 -- nvmf/common.sh@545 -- # jq . 00:23:41.034 00:57:33 -- nvmf/common.sh@546 -- # IFS=, 00:23:41.034 00:57:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:41.034 "params": { 00:23:41.034 "name": "Nvme1", 00:23:41.034 "trtype": "tcp", 00:23:41.034 "traddr": "10.0.0.2", 00:23:41.034 "adrfam": "ipv4", 00:23:41.034 "trsvcid": "4420", 00:23:41.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.034 "hdgst": false, 00:23:41.034 "ddgst": false 00:23:41.034 }, 00:23:41.034 "method": "bdev_nvme_attach_controller" 00:23:41.034 }' 00:23:41.034 [2024-04-27 00:57:33.632148] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:23:41.034 [2024-04-27 00:57:33.632197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1809555 ] 00:23:41.034 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.034 [2024-04-27 00:57:33.687404] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.293 [2024-04-27 00:57:33.759819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.293 Running I/O for 15 seconds... 
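For comparison with the digest test earlier in this log, whose pass/fail check was the (( 106 > 0 )) expression shown there: that count comes from a single RPC call against the bperf socket, with jq pulling the transient-transport-error counter out of bdev_get_iostat. Below is a standalone sketch of the same query, using the rpc.py path, socket path, bdev name and jq filter exactly as they appear in that part of the log; the rpc/errs variable names are only illustrative, and the socket exists only while that bdevperf instance is running.

# Count I/O completed with COMMAND TRANSIENT TRANSPORT ERROR for nvme0n1,
# as recorded by the bdev layer and exposed through bdev_get_iostat.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
       jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 )) && echo "transient transport errors recorded: $errs"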
00:23:44.619 00:57:36 -- host/bdevperf.sh@33 -- # kill -9 1809068 00:23:44.619 00:57:36 -- host/bdevperf.sh@35 -- # sleep 3 00:23:44.619 [2024-04-27 00:57:36.601696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.619 [2024-04-27 00:57:36.601731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.619 [2024-04-27 00:57:36.601757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.619 [2024-04-27 00:57:36.601775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.619 [2024-04-27 00:57:36.601794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.619 [2024-04-27 00:57:36.601812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.619 [2024-04-27 00:57:36.601833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.619 [2024-04-27 00:57:36.601852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.601870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:103592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.601885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.601900] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:103608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.601915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.601930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:103624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.601946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.601961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.601975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.619 [2024-04-27 00:57:36.601990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.601998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.619 [2024-04-27 00:57:36.602005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.619 [2024-04-27 00:57:36.602023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.619 [2024-04-27 00:57:36.602039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.619 [2024-04-27 00:57:36.602054] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:103680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:103704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.619 [2024-04-27 00:57:36.602324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.619 [2024-04-27 00:57:36.602330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:44.620 [2024-04-27 00:57:36.602366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 
00:57:36.602519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:60 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.620 [2024-04-27 00:57:36.602896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.620 [2024-04-27 00:57:36.602912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.620 [2024-04-27 00:57:36.602920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.602927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.602935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.602941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.602949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.602955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.602964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103208 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.602970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.602980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.602987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.602995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 
[2024-04-27 00:57:36.603278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:44.621 [2024-04-27 00:57:36.603293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603427] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.621 [2024-04-27 00:57:36.603519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.621 [2024-04-27 00:57:36.603527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.622 [2024-04-27 00:57:36.603534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.622 [2024-04-27 00:57:36.603541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.622 [2024-04-27 00:57:36.603550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.622 [2024-04-27 00:57:36.603558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.622 [2024-04-27 00:57:36.603564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.622 [2024-04-27 00:57:36.603572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.622 [2024-04-27 00:57:36.603579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.622 [2024-04-27 00:57:36.603587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.622 [2024-04-27 00:57:36.603593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.622 [2024-04-27 00:57:36.603601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.622 [2024-04-27 00:57:36.603608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.622 [2024-04-27 00:57:36.603616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.622 [2024-04-27 00:57:36.603623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.622 [2024-04-27 00:57:36.603631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.622 [2024-04-27 00:57:36.603637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.622 [2024-04-27 00:57:36.603645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.622 [2024-04-27 00:57:36.603651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.622 [2024-04-27 00:57:36.603659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:44.622 [2024-04-27 00:57:36.603666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.622 [2024-04-27 00:57:36.603673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123b10 is same with the state(5) to be set 00:23:44.622 [2024-04-27 00:57:36.603681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:44.622 [2024-04-27 00:57:36.603686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:44.622 [2024-04-27 00:57:36.603693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103576 len:8 PRP1 0x0 PRP2 0x0 00:23:44.622 [2024-04-27 00:57:36.603700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.622 [2024-04-27 00:57:36.603743] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2123b10 was disconnected and freed. reset controller. 
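
Every aborted command in the dump above carries status (00/08): status code type 0x0 (generic) with status code 0x08, "Command Aborted due to SQ Deletion". Once the remote end of the TCP connection disappears after the kill -9 at the top of this run, all queued I/O on the qpair is completed with that status, the qpair is freed, and the driver starts resetting the controller. The reconnect attempts that follow all report connect() failed, errno = 111, because nothing is listening on 10.0.0.2:4420 while the target is down. As a quick aside, the errno can be decoded from the shell (python3 assumed available):

# errno 111 reported by posix_sock_create/connect() is ECONNREFUSED on Linux;
# decoding it confirms why every reconnect to 10.0.0.2:4420 fails below.
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# ECONNREFUSED - Connection refused
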
00:23:44.622 [2024-04-27 00:57:36.606683] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.622 [2024-04-27 00:57:36.606734] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.622 [2024-04-27 00:57:36.607522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.607905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.607916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.622 [2024-04-27 00:57:36.607927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.622 [2024-04-27 00:57:36.608110] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.622 [2024-04-27 00:57:36.608288] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.622 [2024-04-27 00:57:36.608296] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.622 [2024-04-27 00:57:36.608303] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.622 [2024-04-27 00:57:36.611128] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.622 [2024-04-27 00:57:36.619928] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.622 [2024-04-27 00:57:36.620582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.621057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.621068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.622 [2024-04-27 00:57:36.621083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.622 [2024-04-27 00:57:36.621276] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.622 [2024-04-27 00:57:36.621453] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.622 [2024-04-27 00:57:36.621461] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.622 [2024-04-27 00:57:36.621467] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.622 [2024-04-27 00:57:36.624197] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.622 [2024-04-27 00:57:36.632742] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.622 [2024-04-27 00:57:36.633366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.633755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.633787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.622 [2024-04-27 00:57:36.633808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.622 [2024-04-27 00:57:36.634174] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.622 [2024-04-27 00:57:36.634353] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.622 [2024-04-27 00:57:36.634361] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.622 [2024-04-27 00:57:36.634367] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.622 [2024-04-27 00:57:36.637083] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.622 [2024-04-27 00:57:36.645591] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.622 [2024-04-27 00:57:36.646214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.646599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.646629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.622 [2024-04-27 00:57:36.646659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.622 [2024-04-27 00:57:36.647191] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.622 [2024-04-27 00:57:36.647363] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.622 [2024-04-27 00:57:36.647371] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.622 [2024-04-27 00:57:36.647377] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.622 [2024-04-27 00:57:36.650046] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.622 [2024-04-27 00:57:36.658508] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.622 [2024-04-27 00:57:36.659137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.659581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.659612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.622 [2024-04-27 00:57:36.659633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.622 [2024-04-27 00:57:36.660080] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.622 [2024-04-27 00:57:36.660252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.622 [2024-04-27 00:57:36.660260] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.622 [2024-04-27 00:57:36.660266] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.622 [2024-04-27 00:57:36.662933] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.622 [2024-04-27 00:57:36.671316] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.622 [2024-04-27 00:57:36.671961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.672378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.672410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.622 [2024-04-27 00:57:36.672431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.622 [2024-04-27 00:57:36.673007] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.622 [2024-04-27 00:57:36.673213] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.622 [2024-04-27 00:57:36.673221] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.622 [2024-04-27 00:57:36.673227] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.622 [2024-04-27 00:57:36.675892] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.622 [2024-04-27 00:57:36.684166] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.622 [2024-04-27 00:57:36.684807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.685254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.622 [2024-04-27 00:57:36.685265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.622 [2024-04-27 00:57:36.685271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.623 [2024-04-27 00:57:36.685446] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.623 [2024-04-27 00:57:36.685618] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.623 [2024-04-27 00:57:36.685626] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.623 [2024-04-27 00:57:36.685632] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.623 [2024-04-27 00:57:36.688302] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.623 [2024-04-27 00:57:36.697111] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.623 [2024-04-27 00:57:36.697747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.698183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.698215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.623 [2024-04-27 00:57:36.698237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.623 [2024-04-27 00:57:36.698669] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.623 [2024-04-27 00:57:36.698922] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.623 [2024-04-27 00:57:36.698933] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.623 [2024-04-27 00:57:36.698942] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.623 [2024-04-27 00:57:36.702980] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.623 [2024-04-27 00:57:36.710592] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.623 [2024-04-27 00:57:36.711222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.711737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.711767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.623 [2024-04-27 00:57:36.711788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.623 [2024-04-27 00:57:36.712197] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.623 [2024-04-27 00:57:36.712369] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.623 [2024-04-27 00:57:36.712376] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.623 [2024-04-27 00:57:36.712383] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.623 [2024-04-27 00:57:36.715120] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.623 [2024-04-27 00:57:36.723403] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.623 [2024-04-27 00:57:36.723961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.724456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.724489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.623 [2024-04-27 00:57:36.724510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.623 [2024-04-27 00:57:36.725098] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.623 [2024-04-27 00:57:36.725357] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.623 [2024-04-27 00:57:36.725365] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.623 [2024-04-27 00:57:36.725371] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.623 [2024-04-27 00:57:36.728035] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.623 [2024-04-27 00:57:36.736235] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.623 [2024-04-27 00:57:36.736861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.737371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.737403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.623 [2024-04-27 00:57:36.737425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.623 [2024-04-27 00:57:36.737999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.623 [2024-04-27 00:57:36.738447] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.623 [2024-04-27 00:57:36.738455] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.623 [2024-04-27 00:57:36.738461] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.623 [2024-04-27 00:57:36.741139] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.623 [2024-04-27 00:57:36.749137] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.623 [2024-04-27 00:57:36.749780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.750294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.750325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.623 [2024-04-27 00:57:36.750347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.623 [2024-04-27 00:57:36.750921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.623 [2024-04-27 00:57:36.751455] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.623 [2024-04-27 00:57:36.751463] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.623 [2024-04-27 00:57:36.751469] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.623 [2024-04-27 00:57:36.754134] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.623 [2024-04-27 00:57:36.761985] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.623 [2024-04-27 00:57:36.762633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.763144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.763176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.623 [2024-04-27 00:57:36.763197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.623 [2024-04-27 00:57:36.763618] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.623 [2024-04-27 00:57:36.763790] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.623 [2024-04-27 00:57:36.763800] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.623 [2024-04-27 00:57:36.763806] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.623 [2024-04-27 00:57:36.766474] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.623 [2024-04-27 00:57:36.774916] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.623 [2024-04-27 00:57:36.775579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.776097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.623 [2024-04-27 00:57:36.776129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.623 [2024-04-27 00:57:36.776150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.623 [2024-04-27 00:57:36.776630] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.623 [2024-04-27 00:57:36.776802] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.623 [2024-04-27 00:57:36.776809] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.623 [2024-04-27 00:57:36.776816] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.624 [2024-04-27 00:57:36.779483] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.624 [2024-04-27 00:57:36.787774] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.624 [2024-04-27 00:57:36.788340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.788798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.788828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.624 [2024-04-27 00:57:36.788849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.624 [2024-04-27 00:57:36.789382] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.624 [2024-04-27 00:57:36.789554] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.624 [2024-04-27 00:57:36.789562] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.624 [2024-04-27 00:57:36.789568] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.624 [2024-04-27 00:57:36.793401] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.624 [2024-04-27 00:57:36.801461] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.624 [2024-04-27 00:57:36.802101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.802589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.802618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.624 [2024-04-27 00:57:36.802639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.624 [2024-04-27 00:57:36.803235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.624 [2024-04-27 00:57:36.803800] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.624 [2024-04-27 00:57:36.803808] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.624 [2024-04-27 00:57:36.803817] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.624 [2024-04-27 00:57:36.806524] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.624 [2024-04-27 00:57:36.814311] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.624 [2024-04-27 00:57:36.814959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.815446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.815478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.624 [2024-04-27 00:57:36.815500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.624 [2024-04-27 00:57:36.816085] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.624 [2024-04-27 00:57:36.816616] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.624 [2024-04-27 00:57:36.816624] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.624 [2024-04-27 00:57:36.816630] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.624 [2024-04-27 00:57:36.819322] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.624 [2024-04-27 00:57:36.827170] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.624 [2024-04-27 00:57:36.827836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.828346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.828378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.624 [2024-04-27 00:57:36.828411] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.624 [2024-04-27 00:57:36.828583] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.624 [2024-04-27 00:57:36.828754] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.624 [2024-04-27 00:57:36.828762] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.624 [2024-04-27 00:57:36.828768] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.624 [2024-04-27 00:57:36.831438] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.624 [2024-04-27 00:57:36.840073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.624 [2024-04-27 00:57:36.840727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.841233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.841265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.624 [2024-04-27 00:57:36.841287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.624 [2024-04-27 00:57:36.841863] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.624 [2024-04-27 00:57:36.842060] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.624 [2024-04-27 00:57:36.842068] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.624 [2024-04-27 00:57:36.842079] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.624 [2024-04-27 00:57:36.844745] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.624 [2024-04-27 00:57:36.852921] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.624 [2024-04-27 00:57:36.853599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.853963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.853993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.624 [2024-04-27 00:57:36.854015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.624 [2024-04-27 00:57:36.854595] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.624 [2024-04-27 00:57:36.854773] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.624 [2024-04-27 00:57:36.854781] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.624 [2024-04-27 00:57:36.854787] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.624 [2024-04-27 00:57:36.857625] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.624 [2024-04-27 00:57:36.866185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.624 [2024-04-27 00:57:36.866775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.867266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.867300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.624 [2024-04-27 00:57:36.867322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.624 [2024-04-27 00:57:36.867639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.624 [2024-04-27 00:57:36.867823] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.624 [2024-04-27 00:57:36.867831] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.624 [2024-04-27 00:57:36.867837] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.624 [2024-04-27 00:57:36.870711] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.624 [2024-04-27 00:57:36.879218] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.624 [2024-04-27 00:57:36.879867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.880286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.880297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.624 [2024-04-27 00:57:36.880304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.624 [2024-04-27 00:57:36.880480] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.624 [2024-04-27 00:57:36.880657] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.624 [2024-04-27 00:57:36.880664] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.624 [2024-04-27 00:57:36.880671] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.624 [2024-04-27 00:57:36.883395] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.624 [2024-04-27 00:57:36.892162] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.624 [2024-04-27 00:57:36.892797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.893285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.893316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.624 [2024-04-27 00:57:36.893338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.624 [2024-04-27 00:57:36.893847] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.624 [2024-04-27 00:57:36.894024] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.624 [2024-04-27 00:57:36.894032] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.624 [2024-04-27 00:57:36.894038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.624 [2024-04-27 00:57:36.896747] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.624 [2024-04-27 00:57:36.905019] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.624 [2024-04-27 00:57:36.905676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.624 [2024-04-27 00:57:36.906199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.906231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.625 [2024-04-27 00:57:36.906252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.625 [2024-04-27 00:57:36.906828] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.625 [2024-04-27 00:57:36.907083] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.625 [2024-04-27 00:57:36.907091] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.625 [2024-04-27 00:57:36.907097] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.625 [2024-04-27 00:57:36.909812] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.625 [2024-04-27 00:57:36.917821] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.625 [2024-04-27 00:57:36.918442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.918961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.918990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.625 [2024-04-27 00:57:36.919013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.625 [2024-04-27 00:57:36.919592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.625 [2024-04-27 00:57:36.919764] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.625 [2024-04-27 00:57:36.919772] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.625 [2024-04-27 00:57:36.919778] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.625 [2024-04-27 00:57:36.922448] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.625 [2024-04-27 00:57:36.930793] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.625 [2024-04-27 00:57:36.931420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.931864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.931874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.625 [2024-04-27 00:57:36.931881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.625 [2024-04-27 00:57:36.932053] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.625 [2024-04-27 00:57:36.932250] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.625 [2024-04-27 00:57:36.932259] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.625 [2024-04-27 00:57:36.932265] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.625 [2024-04-27 00:57:36.935019] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.625 [2024-04-27 00:57:36.943595] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.625 [2024-04-27 00:57:36.944221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.944731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.944761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.625 [2024-04-27 00:57:36.944782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.625 [2024-04-27 00:57:36.945372] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.625 [2024-04-27 00:57:36.945954] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.625 [2024-04-27 00:57:36.945962] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.625 [2024-04-27 00:57:36.945968] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.625 [2024-04-27 00:57:36.948649] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.625 [2024-04-27 00:57:36.956382] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.625 [2024-04-27 00:57:36.957029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.957534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.957565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.625 [2024-04-27 00:57:36.957586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.625 [2024-04-27 00:57:36.958003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.625 [2024-04-27 00:57:36.958179] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.625 [2024-04-27 00:57:36.958187] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.625 [2024-04-27 00:57:36.958194] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.625 [2024-04-27 00:57:36.960860] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.625 [2024-04-27 00:57:36.969288] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.625 [2024-04-27 00:57:36.969863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.970347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.970386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.625 [2024-04-27 00:57:36.970408] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.625 [2024-04-27 00:57:36.970985] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.625 [2024-04-27 00:57:36.971185] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.625 [2024-04-27 00:57:36.971193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.625 [2024-04-27 00:57:36.971199] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.625 [2024-04-27 00:57:36.973866] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.625 [2024-04-27 00:57:36.982195] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.625 [2024-04-27 00:57:36.982797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.983260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.983271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.625 [2024-04-27 00:57:36.983277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.625 [2024-04-27 00:57:36.983449] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.625 [2024-04-27 00:57:36.983622] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.625 [2024-04-27 00:57:36.983630] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.625 [2024-04-27 00:57:36.983635] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.625 [2024-04-27 00:57:36.986266] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.625 [2024-04-27 00:57:36.995054] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.625 [2024-04-27 00:57:36.995705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.996156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:36.996187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.625 [2024-04-27 00:57:36.996208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.625 [2024-04-27 00:57:36.996785] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.625 [2024-04-27 00:57:36.996987] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.625 [2024-04-27 00:57:36.996995] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.625 [2024-04-27 00:57:36.997001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.625 [2024-04-27 00:57:36.999724] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.625 [2024-04-27 00:57:37.007900] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.625 [2024-04-27 00:57:37.008488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:37.008818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:37.008848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.625 [2024-04-27 00:57:37.008876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.625 [2024-04-27 00:57:37.009337] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.625 [2024-04-27 00:57:37.009509] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.625 [2024-04-27 00:57:37.009517] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.625 [2024-04-27 00:57:37.009523] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.625 [2024-04-27 00:57:37.012280] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.625 [2024-04-27 00:57:37.020970] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.625 [2024-04-27 00:57:37.021583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:37.022256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.625 [2024-04-27 00:57:37.022289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.625 [2024-04-27 00:57:37.022312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.625 [2024-04-27 00:57:37.022890] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.625 [2024-04-27 00:57:37.023302] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.626 [2024-04-27 00:57:37.023314] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.626 [2024-04-27 00:57:37.023322] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.626 [2024-04-27 00:57:37.027370] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.626 [2024-04-27 00:57:37.034566] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.626 [2024-04-27 00:57:37.035228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.035671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.035701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.626 [2024-04-27 00:57:37.035722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.626 [2024-04-27 00:57:37.036000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.626 [2024-04-27 00:57:37.036188] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.626 [2024-04-27 00:57:37.036197] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.626 [2024-04-27 00:57:37.036206] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.626 [2024-04-27 00:57:37.039039] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.626 [2024-04-27 00:57:37.047714] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.626 [2024-04-27 00:57:37.048394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.048910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.048952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.626 [2024-04-27 00:57:37.048960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.626 [2024-04-27 00:57:37.049139] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.626 [2024-04-27 00:57:37.049311] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.626 [2024-04-27 00:57:37.049319] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.626 [2024-04-27 00:57:37.049325] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.626 [2024-04-27 00:57:37.051997] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.626 [2024-04-27 00:57:37.060744] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.626 [2024-04-27 00:57:37.061423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.061799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.061829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.626 [2024-04-27 00:57:37.061850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.626 [2024-04-27 00:57:37.062347] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.626 [2024-04-27 00:57:37.062520] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.626 [2024-04-27 00:57:37.062528] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.626 [2024-04-27 00:57:37.062534] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.626 [2024-04-27 00:57:37.065235] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.626 [2024-04-27 00:57:37.073627] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.626 [2024-04-27 00:57:37.074298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.074723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.074753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.626 [2024-04-27 00:57:37.074774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.626 [2024-04-27 00:57:37.075267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.626 [2024-04-27 00:57:37.075440] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.626 [2024-04-27 00:57:37.075447] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.626 [2024-04-27 00:57:37.075453] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.626 [2024-04-27 00:57:37.078160] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.626 [2024-04-27 00:57:37.086691] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.626 [2024-04-27 00:57:37.087306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.087730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.087760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.626 [2024-04-27 00:57:37.087782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.626 [2024-04-27 00:57:37.088285] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.626 [2024-04-27 00:57:37.088461] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.626 [2024-04-27 00:57:37.088468] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.626 [2024-04-27 00:57:37.088474] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.626 [2024-04-27 00:57:37.091175] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.626 [2024-04-27 00:57:37.099543] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.626 [2024-04-27 00:57:37.100193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.100632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.100662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.626 [2024-04-27 00:57:37.100685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.626 [2024-04-27 00:57:37.101193] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.626 [2024-04-27 00:57:37.101367] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.626 [2024-04-27 00:57:37.101375] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.626 [2024-04-27 00:57:37.101381] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.626 [2024-04-27 00:57:37.104049] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.626 [2024-04-27 00:57:37.112582] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.626 [2024-04-27 00:57:37.113240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.113566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.113577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.626 [2024-04-27 00:57:37.113583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.626 [2024-04-27 00:57:37.113761] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.626 [2024-04-27 00:57:37.113941] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.626 [2024-04-27 00:57:37.113949] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.626 [2024-04-27 00:57:37.113955] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.626 [2024-04-27 00:57:37.116780] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.626 [2024-04-27 00:57:37.125760] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.626 [2024-04-27 00:57:37.126348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.126671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.126682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.626 [2024-04-27 00:57:37.126689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.626 [2024-04-27 00:57:37.126865] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.626 [2024-04-27 00:57:37.127042] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.626 [2024-04-27 00:57:37.127054] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.626 [2024-04-27 00:57:37.127061] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.626 [2024-04-27 00:57:37.129941] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.626 [2024-04-27 00:57:37.138972] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.626 [2024-04-27 00:57:37.139581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.140001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.626 [2024-04-27 00:57:37.140011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.626 [2024-04-27 00:57:37.140018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.626 [2024-04-27 00:57:37.140206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.626 [2024-04-27 00:57:37.140388] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.626 [2024-04-27 00:57:37.140396] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.626 [2024-04-27 00:57:37.140403] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.626 [2024-04-27 00:57:37.143292] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.626 [2024-04-27 00:57:37.152020] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.626 [2024-04-27 00:57:37.152662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.153117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.153128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.627 [2024-04-27 00:57:37.153135] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.627 [2024-04-27 00:57:37.153312] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.627 [2024-04-27 00:57:37.153489] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.627 [2024-04-27 00:57:37.153496] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.627 [2024-04-27 00:57:37.153502] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.627 [2024-04-27 00:57:37.156325] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.627 [2024-04-27 00:57:37.165138] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.627 [2024-04-27 00:57:37.165762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.166211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.166222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.627 [2024-04-27 00:57:37.166229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.627 [2024-04-27 00:57:37.166406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.627 [2024-04-27 00:57:37.166582] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.627 [2024-04-27 00:57:37.166590] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.627 [2024-04-27 00:57:37.166601] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.627 [2024-04-27 00:57:37.169424] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.627 [2024-04-27 00:57:37.178171] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.627 [2024-04-27 00:57:37.178826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.179104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.179115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.627 [2024-04-27 00:57:37.179122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.627 [2024-04-27 00:57:37.179298] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.627 [2024-04-27 00:57:37.179475] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.627 [2024-04-27 00:57:37.179483] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.627 [2024-04-27 00:57:37.179489] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.627 [2024-04-27 00:57:37.182305] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.627 [2024-04-27 00:57:37.191288] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.627 [2024-04-27 00:57:37.191939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.192334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.192345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.627 [2024-04-27 00:57:37.192351] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.627 [2024-04-27 00:57:37.192527] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.627 [2024-04-27 00:57:37.192703] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.627 [2024-04-27 00:57:37.192711] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.627 [2024-04-27 00:57:37.192717] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.627 [2024-04-27 00:57:37.195540] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.627 [2024-04-27 00:57:37.204356] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.627 [2024-04-27 00:57:37.205003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.205421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.205431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.627 [2024-04-27 00:57:37.205438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.627 [2024-04-27 00:57:37.205615] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.627 [2024-04-27 00:57:37.205792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.627 [2024-04-27 00:57:37.205800] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.627 [2024-04-27 00:57:37.205806] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.627 [2024-04-27 00:57:37.208631] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.627 [2024-04-27 00:57:37.217649] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.627 [2024-04-27 00:57:37.218212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.218593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.218603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.627 [2024-04-27 00:57:37.218610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.627 [2024-04-27 00:57:37.218787] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.627 [2024-04-27 00:57:37.218965] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.627 [2024-04-27 00:57:37.218973] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.627 [2024-04-27 00:57:37.218979] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.627 [2024-04-27 00:57:37.221801] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.627 [2024-04-27 00:57:37.230793] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.627 [2024-04-27 00:57:37.231438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.231864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.231874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.627 [2024-04-27 00:57:37.231882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.627 [2024-04-27 00:57:37.232060] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.627 [2024-04-27 00:57:37.232242] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.627 [2024-04-27 00:57:37.232251] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.627 [2024-04-27 00:57:37.232257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.627 [2024-04-27 00:57:37.235082] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.627 [2024-04-27 00:57:37.243902] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.627 [2024-04-27 00:57:37.244495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.244891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.244901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.627 [2024-04-27 00:57:37.244908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.627 [2024-04-27 00:57:37.245091] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.627 [2024-04-27 00:57:37.245268] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.627 [2024-04-27 00:57:37.245277] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.627 [2024-04-27 00:57:37.245283] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.627 [2024-04-27 00:57:37.248153] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.627 [2024-04-27 00:57:37.256937] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.627 [2024-04-27 00:57:37.257574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.258016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.258026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.627 [2024-04-27 00:57:37.258033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.627 [2024-04-27 00:57:37.258215] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.627 [2024-04-27 00:57:37.258393] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.627 [2024-04-27 00:57:37.258401] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.627 [2024-04-27 00:57:37.258408] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.627 [2024-04-27 00:57:37.261224] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.627 [2024-04-27 00:57:37.270032] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.627 [2024-04-27 00:57:37.270689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.271062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.627 [2024-04-27 00:57:37.271076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.628 [2024-04-27 00:57:37.271083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.628 [2024-04-27 00:57:37.271260] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.628 [2024-04-27 00:57:37.271437] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.628 [2024-04-27 00:57:37.271445] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.628 [2024-04-27 00:57:37.271451] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.628 [2024-04-27 00:57:37.274273] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.628 [2024-04-27 00:57:37.283171] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.628 [2024-04-27 00:57:37.283767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.628 [2024-04-27 00:57:37.284188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.628 [2024-04-27 00:57:37.284199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.628 [2024-04-27 00:57:37.284206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.628 [2024-04-27 00:57:37.284382] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.628 [2024-04-27 00:57:37.284559] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.628 [2024-04-27 00:57:37.284567] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.628 [2024-04-27 00:57:37.284573] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.628 [2024-04-27 00:57:37.287394] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.628 [2024-04-27 00:57:37.296216] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.628 [2024-04-27 00:57:37.296868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.628 [2024-04-27 00:57:37.297242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.628 [2024-04-27 00:57:37.297253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.628 [2024-04-27 00:57:37.297260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.628 [2024-04-27 00:57:37.297437] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.628 [2024-04-27 00:57:37.297614] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.628 [2024-04-27 00:57:37.297622] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.628 [2024-04-27 00:57:37.297628] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.628 [2024-04-27 00:57:37.300447] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.886 [2024-04-27 00:57:37.309439] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.886 [2024-04-27 00:57:37.310107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.310532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.310542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.887 [2024-04-27 00:57:37.310550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.887 [2024-04-27 00:57:37.310728] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.887 [2024-04-27 00:57:37.310905] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.887 [2024-04-27 00:57:37.310913] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.887 [2024-04-27 00:57:37.310919] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.887 [2024-04-27 00:57:37.313739] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.887 [2024-04-27 00:57:37.322559] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.887 [2024-04-27 00:57:37.323211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.323658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.323669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.887 [2024-04-27 00:57:37.323676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.887 [2024-04-27 00:57:37.323853] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.887 [2024-04-27 00:57:37.324030] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.887 [2024-04-27 00:57:37.324038] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.887 [2024-04-27 00:57:37.324044] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.887 [2024-04-27 00:57:37.327109] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.887 [2024-04-27 00:57:37.335665] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.887 [2024-04-27 00:57:37.336344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.336773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.336786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.887 [2024-04-27 00:57:37.336794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.887 [2024-04-27 00:57:37.336971] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.887 [2024-04-27 00:57:37.337159] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.887 [2024-04-27 00:57:37.337167] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.887 [2024-04-27 00:57:37.337174] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.887 [2024-04-27 00:57:37.339992] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.887 [2024-04-27 00:57:37.348802] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.887 [2024-04-27 00:57:37.349464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.349907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.349917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.887 [2024-04-27 00:57:37.349924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.887 [2024-04-27 00:57:37.350106] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.887 [2024-04-27 00:57:37.350284] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.887 [2024-04-27 00:57:37.350292] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.887 [2024-04-27 00:57:37.350299] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.887 [2024-04-27 00:57:37.353124] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.887 [2024-04-27 00:57:37.361984] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.887 [2024-04-27 00:57:37.362593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.362973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.362983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.887 [2024-04-27 00:57:37.362990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.887 [2024-04-27 00:57:37.363178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.887 [2024-04-27 00:57:37.363360] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.887 [2024-04-27 00:57:37.363369] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.887 [2024-04-27 00:57:37.363375] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.887 [2024-04-27 00:57:37.366281] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.887 [2024-04-27 00:57:37.375089] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.887 [2024-04-27 00:57:37.375670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.376081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.376091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.887 [2024-04-27 00:57:37.376101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.887 [2024-04-27 00:57:37.376278] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.887 [2024-04-27 00:57:37.376456] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.887 [2024-04-27 00:57:37.376464] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.887 [2024-04-27 00:57:37.376469] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.887 [2024-04-27 00:57:37.379291] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.887 [2024-04-27 00:57:37.388271] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.887 [2024-04-27 00:57:37.388932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.389333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.389365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.887 [2024-04-27 00:57:37.389387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.887 [2024-04-27 00:57:37.389935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.887 [2024-04-27 00:57:37.390117] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.887 [2024-04-27 00:57:37.390125] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.887 [2024-04-27 00:57:37.390131] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.887 [2024-04-27 00:57:37.392949] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.887 [2024-04-27 00:57:37.401312] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.887 [2024-04-27 00:57:37.401972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.402391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.402402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.887 [2024-04-27 00:57:37.402409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.887 [2024-04-27 00:57:37.402586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.887 [2024-04-27 00:57:37.402763] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.887 [2024-04-27 00:57:37.402770] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.887 [2024-04-27 00:57:37.402777] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.887 [2024-04-27 00:57:37.405559] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.887 [2024-04-27 00:57:37.414439] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.887 [2024-04-27 00:57:37.415118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.415624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.415654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.887 [2024-04-27 00:57:37.415675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.887 [2024-04-27 00:57:37.416198] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.887 [2024-04-27 00:57:37.416383] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.887 [2024-04-27 00:57:37.416391] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.887 [2024-04-27 00:57:37.416397] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.887 [2024-04-27 00:57:37.419063] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.887 [2024-04-27 00:57:37.427212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.887 [2024-04-27 00:57:37.427842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.428259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.428290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.887 [2024-04-27 00:57:37.428312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.887 [2024-04-27 00:57:37.428888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.887 [2024-04-27 00:57:37.429131] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.887 [2024-04-27 00:57:37.429139] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.887 [2024-04-27 00:57:37.429145] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.887 [2024-04-27 00:57:37.431748] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.887 [2024-04-27 00:57:37.440065] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.887 [2024-04-27 00:57:37.440709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.887 [2024-04-27 00:57:37.441121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.441160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.888 [2024-04-27 00:57:37.441167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.888 [2024-04-27 00:57:37.441339] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.888 [2024-04-27 00:57:37.441510] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.888 [2024-04-27 00:57:37.441518] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.888 [2024-04-27 00:57:37.441524] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.888 [2024-04-27 00:57:37.444164] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.888 [2024-04-27 00:57:37.452874] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.888 [2024-04-27 00:57:37.453295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.453786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.453816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.888 [2024-04-27 00:57:37.453837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.888 [2024-04-27 00:57:37.454379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.888 [2024-04-27 00:57:37.454556] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.888 [2024-04-27 00:57:37.454564] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.888 [2024-04-27 00:57:37.454570] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.888 [2024-04-27 00:57:37.457243] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.888 [2024-04-27 00:57:37.465728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.888 [2024-04-27 00:57:37.466379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.466796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.466827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.888 [2024-04-27 00:57:37.466849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.888 [2024-04-27 00:57:37.467090] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.888 [2024-04-27 00:57:37.467305] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.888 [2024-04-27 00:57:37.467316] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.888 [2024-04-27 00:57:37.467325] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.888 [2024-04-27 00:57:37.471364] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.888 [2024-04-27 00:57:37.479189] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.888 [2024-04-27 00:57:37.479771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.480159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.480192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.888 [2024-04-27 00:57:37.480213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.888 [2024-04-27 00:57:37.480789] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.888 [2024-04-27 00:57:37.481093] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.888 [2024-04-27 00:57:37.481102] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.888 [2024-04-27 00:57:37.481108] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.888 [2024-04-27 00:57:37.483802] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.888 [2024-04-27 00:57:37.492048] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.888 [2024-04-27 00:57:37.492580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.493005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.493034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.888 [2024-04-27 00:57:37.493055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.888 [2024-04-27 00:57:37.493447] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.888 [2024-04-27 00:57:37.493618] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.888 [2024-04-27 00:57:37.493629] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.888 [2024-04-27 00:57:37.493635] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.888 [2024-04-27 00:57:37.496332] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.888 [2024-04-27 00:57:37.504845] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.888 [2024-04-27 00:57:37.505512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.505952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.505982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.888 [2024-04-27 00:57:37.506003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.888 [2024-04-27 00:57:37.506299] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.888 [2024-04-27 00:57:37.506476] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.888 [2024-04-27 00:57:37.506484] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.888 [2024-04-27 00:57:37.506491] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.888 [2024-04-27 00:57:37.509194] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.888 [2024-04-27 00:57:37.517761] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.888 [2024-04-27 00:57:37.518332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.518829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.518859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.888 [2024-04-27 00:57:37.518880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.888 [2024-04-27 00:57:37.519332] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.888 [2024-04-27 00:57:37.519504] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.888 [2024-04-27 00:57:37.519512] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.888 [2024-04-27 00:57:37.519518] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.888 [2024-04-27 00:57:37.522188] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.888 [2024-04-27 00:57:37.530588] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.888 [2024-04-27 00:57:37.531209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.531661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.531670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.888 [2024-04-27 00:57:37.531677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.888 [2024-04-27 00:57:37.531849] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.888 [2024-04-27 00:57:37.532021] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.888 [2024-04-27 00:57:37.532029] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.888 [2024-04-27 00:57:37.532038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.888 [2024-04-27 00:57:37.534819] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.888 [2024-04-27 00:57:37.543498] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.888 [2024-04-27 00:57:37.544146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.544630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.544660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.888 [2024-04-27 00:57:37.544682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.888 [2024-04-27 00:57:37.545200] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.888 [2024-04-27 00:57:37.545372] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.888 [2024-04-27 00:57:37.545379] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.888 [2024-04-27 00:57:37.545387] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.888 [2024-04-27 00:57:37.548165] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:44.888 [2024-04-27 00:57:37.556576] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.888 [2024-04-27 00:57:37.557223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.557715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.557745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.888 [2024-04-27 00:57:37.557766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.888 [2024-04-27 00:57:37.558356] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.888 [2024-04-27 00:57:37.558720] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.888 [2024-04-27 00:57:37.558731] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.888 [2024-04-27 00:57:37.558740] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.888 [2024-04-27 00:57:37.562779] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:44.888 [2024-04-27 00:57:37.570258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:44.888 [2024-04-27 00:57:37.570892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.571312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:44.888 [2024-04-27 00:57:37.571324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:44.888 [2024-04-27 00:57:37.571330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:44.889 [2024-04-27 00:57:37.571502] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:44.889 [2024-04-27 00:57:37.571674] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:44.889 [2024-04-27 00:57:37.571681] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:44.889 [2024-04-27 00:57:37.571688] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:44.889 [2024-04-27 00:57:37.574429] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.149 [2024-04-27 00:57:37.583523] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.149 [2024-04-27 00:57:37.584186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.584646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.584679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.149 [2024-04-27 00:57:37.584701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.149 [2024-04-27 00:57:37.585297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.149 [2024-04-27 00:57:37.585678] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.149 [2024-04-27 00:57:37.585686] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.149 [2024-04-27 00:57:37.585693] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.149 [2024-04-27 00:57:37.588529] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.149 [2024-04-27 00:57:37.596398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.149 [2024-04-27 00:57:37.597063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.597549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.597580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.149 [2024-04-27 00:57:37.597610] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.149 [2024-04-27 00:57:37.597781] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.149 [2024-04-27 00:57:37.597952] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.149 [2024-04-27 00:57:37.597960] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.149 [2024-04-27 00:57:37.597966] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.149 [2024-04-27 00:57:37.600699] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.149 [2024-04-27 00:57:37.609278] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.149 [2024-04-27 00:57:37.609843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.610043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.610085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.149 [2024-04-27 00:57:37.610111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.149 [2024-04-27 00:57:37.610631] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.149 [2024-04-27 00:57:37.610803] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.149 [2024-04-27 00:57:37.610811] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.149 [2024-04-27 00:57:37.610817] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.149 [2024-04-27 00:57:37.613552] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.149 [2024-04-27 00:57:37.622414] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.149 [2024-04-27 00:57:37.623015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.623440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.623472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.149 [2024-04-27 00:57:37.623494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.149 [2024-04-27 00:57:37.623834] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.149 [2024-04-27 00:57:37.624006] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.149 [2024-04-27 00:57:37.624014] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.149 [2024-04-27 00:57:37.624020] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.149 [2024-04-27 00:57:37.626740] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.149 [2024-04-27 00:57:37.635389] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.149 [2024-04-27 00:57:37.636048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.636459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.636491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.149 [2024-04-27 00:57:37.636512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.149 [2024-04-27 00:57:37.637112] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.149 [2024-04-27 00:57:37.637675] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.149 [2024-04-27 00:57:37.637683] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.149 [2024-04-27 00:57:37.637689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.149 [2024-04-27 00:57:37.640397] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.149 [2024-04-27 00:57:37.648274] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.149 [2024-04-27 00:57:37.648917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.649388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.649419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.149 [2024-04-27 00:57:37.649441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.149 [2024-04-27 00:57:37.649841] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.149 [2024-04-27 00:57:37.650097] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.149 [2024-04-27 00:57:37.650109] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.149 [2024-04-27 00:57:37.650118] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.149 [2024-04-27 00:57:37.654158] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.149 [2024-04-27 00:57:37.661789] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.149 [2024-04-27 00:57:37.662469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.662901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.662931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.149 [2024-04-27 00:57:37.662952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.149 [2024-04-27 00:57:37.663481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.149 [2024-04-27 00:57:37.663654] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.149 [2024-04-27 00:57:37.663662] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.149 [2024-04-27 00:57:37.663668] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.149 [2024-04-27 00:57:37.666405] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.149 [2024-04-27 00:57:37.674695] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.149 [2024-04-27 00:57:37.675069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.675482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.149 [2024-04-27 00:57:37.675492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.149 [2024-04-27 00:57:37.675499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.149 [2024-04-27 00:57:37.675670] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.149 [2024-04-27 00:57:37.675842] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.149 [2024-04-27 00:57:37.675850] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.150 [2024-04-27 00:57:37.675856] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.150 [2024-04-27 00:57:37.678644] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.150 [2024-04-27 00:57:37.687536] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.150 [2024-04-27 00:57:37.688214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.688669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.688699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.150 [2024-04-27 00:57:37.688721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.150 [2024-04-27 00:57:37.689313] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.150 [2024-04-27 00:57:37.689850] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.150 [2024-04-27 00:57:37.689860] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.150 [2024-04-27 00:57:37.689869] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.150 [2024-04-27 00:57:37.693909] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.150 [2024-04-27 00:57:37.701450] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.150 [2024-04-27 00:57:37.702079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.702473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.702511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.150 [2024-04-27 00:57:37.702532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.150 [2024-04-27 00:57:37.703005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.150 [2024-04-27 00:57:37.703180] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.150 [2024-04-27 00:57:37.703188] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.150 [2024-04-27 00:57:37.703194] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.150 [2024-04-27 00:57:37.705929] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.150 [2024-04-27 00:57:37.714230] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.150 [2024-04-27 00:57:37.714919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.715384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.715417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.150 [2024-04-27 00:57:37.715440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.150 [2024-04-27 00:57:37.716007] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.150 [2024-04-27 00:57:37.716182] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.150 [2024-04-27 00:57:37.716190] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.150 [2024-04-27 00:57:37.716196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.150 [2024-04-27 00:57:37.718862] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.150 [2024-04-27 00:57:37.727147] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.150 [2024-04-27 00:57:37.727818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.728310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.728343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.150 [2024-04-27 00:57:37.728364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.150 [2024-04-27 00:57:37.728684] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.150 [2024-04-27 00:57:37.728856] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.150 [2024-04-27 00:57:37.728864] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.150 [2024-04-27 00:57:37.728869] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.150 [2024-04-27 00:57:37.731489] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.150 [2024-04-27 00:57:37.739935] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.150 [2024-04-27 00:57:37.740493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.740984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.741013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.150 [2024-04-27 00:57:37.741042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.150 [2024-04-27 00:57:37.741635] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.150 [2024-04-27 00:57:37.742097] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.150 [2024-04-27 00:57:37.742105] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.150 [2024-04-27 00:57:37.742111] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.150 [2024-04-27 00:57:37.744777] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.150 [2024-04-27 00:57:37.752756] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.150 [2024-04-27 00:57:37.753336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.753806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.753836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.150 [2024-04-27 00:57:37.753857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.150 [2024-04-27 00:57:37.754450] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.150 [2024-04-27 00:57:37.754920] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.150 [2024-04-27 00:57:37.754928] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.150 [2024-04-27 00:57:37.754934] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.150 [2024-04-27 00:57:37.757605] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.150 [2024-04-27 00:57:37.765531] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.150 [2024-04-27 00:57:37.766165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.766666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.766696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.150 [2024-04-27 00:57:37.766718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.150 [2024-04-27 00:57:37.767012] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.150 [2024-04-27 00:57:37.767201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.150 [2024-04-27 00:57:37.767209] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.150 [2024-04-27 00:57:37.767215] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.150 [2024-04-27 00:57:37.769885] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.150 [2024-04-27 00:57:37.778431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.150 [2024-04-27 00:57:37.779029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.779544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.779575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.150 [2024-04-27 00:57:37.779598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.150 [2024-04-27 00:57:37.780078] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.150 [2024-04-27 00:57:37.780271] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.150 [2024-04-27 00:57:37.780279] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.150 [2024-04-27 00:57:37.780285] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.150 [2024-04-27 00:57:37.782944] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.150 [2024-04-27 00:57:37.791212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.150 [2024-04-27 00:57:37.791880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.792256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.792287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.150 [2024-04-27 00:57:37.792308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.150 [2024-04-27 00:57:37.792884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.150 [2024-04-27 00:57:37.793111] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.150 [2024-04-27 00:57:37.793120] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.150 [2024-04-27 00:57:37.793126] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.150 [2024-04-27 00:57:37.795728] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.150 [2024-04-27 00:57:37.804046] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.150 [2024-04-27 00:57:37.804663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.150 [2024-04-27 00:57:37.805140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.151 [2024-04-27 00:57:37.805174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.151 [2024-04-27 00:57:37.805195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.151 [2024-04-27 00:57:37.805772] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.151 [2024-04-27 00:57:37.806265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.151 [2024-04-27 00:57:37.806273] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.151 [2024-04-27 00:57:37.806278] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.151 [2024-04-27 00:57:37.808949] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.151 [2024-04-27 00:57:37.816914] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.151 [2024-04-27 00:57:37.817585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.151 [2024-04-27 00:57:37.817998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.151 [2024-04-27 00:57:37.818028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.151 [2024-04-27 00:57:37.818050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.151 [2024-04-27 00:57:37.818638] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.151 [2024-04-27 00:57:37.819075] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.151 [2024-04-27 00:57:37.819083] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.151 [2024-04-27 00:57:37.819089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.151 [2024-04-27 00:57:37.821748] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.151 [2024-04-27 00:57:37.829702] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.151 [2024-04-27 00:57:37.830347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.151 [2024-04-27 00:57:37.830811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.151 [2024-04-27 00:57:37.830841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.151 [2024-04-27 00:57:37.830862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.151 [2024-04-27 00:57:37.831047] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.151 [2024-04-27 00:57:37.831236] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.151 [2024-04-27 00:57:37.831245] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.151 [2024-04-27 00:57:37.831250] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.151 [2024-04-27 00:57:37.835179] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.151 [2024-04-27 00:57:37.843275] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.411 [2024-04-27 00:57:37.843985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.411 [2024-04-27 00:57:37.844408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.411 [2024-04-27 00:57:37.844446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.411 [2024-04-27 00:57:37.844470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.411 [2024-04-27 00:57:37.845050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.411 [2024-04-27 00:57:37.845327] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.411 [2024-04-27 00:57:37.845335] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.411 [2024-04-27 00:57:37.845341] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.411 [2024-04-27 00:57:37.848120] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.411 [2024-04-27 00:57:37.856294] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.411 [2024-04-27 00:57:37.856889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.411 [2024-04-27 00:57:37.857362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.411 [2024-04-27 00:57:37.857395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.411 [2024-04-27 00:57:37.857417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.411 [2024-04-27 00:57:37.857946] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.411 [2024-04-27 00:57:37.858128] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.411 [2024-04-27 00:57:37.858140] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.411 [2024-04-27 00:57:37.858146] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.411 [2024-04-27 00:57:37.860816] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.411 [2024-04-27 00:57:37.869157] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.411 [2024-04-27 00:57:37.869821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.411 [2024-04-27 00:57:37.870234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.411 [2024-04-27 00:57:37.870269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.411 [2024-04-27 00:57:37.870291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.411 [2024-04-27 00:57:37.870483] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.411 [2024-04-27 00:57:37.870659] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.411 [2024-04-27 00:57:37.870667] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.411 [2024-04-27 00:57:37.870673] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.411 [2024-04-27 00:57:37.873507] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.411 [2024-04-27 00:57:37.881979] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.411 [2024-04-27 00:57:37.882643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.411 [2024-04-27 00:57:37.882798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.411 [2024-04-27 00:57:37.882808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.411 [2024-04-27 00:57:37.882815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.411 [2024-04-27 00:57:37.882986] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.411 [2024-04-27 00:57:37.883162] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.412 [2024-04-27 00:57:37.883170] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.412 [2024-04-27 00:57:37.883176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.412 [2024-04-27 00:57:37.885847] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.412 [2024-04-27 00:57:37.894783] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.412 [2024-04-27 00:57:37.895438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.895900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.895930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.412 [2024-04-27 00:57:37.895952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.412 [2024-04-27 00:57:37.896243] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.412 [2024-04-27 00:57:37.896416] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.412 [2024-04-27 00:57:37.896423] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.412 [2024-04-27 00:57:37.896433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.412 [2024-04-27 00:57:37.899165] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.412 [2024-04-27 00:57:37.907685] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.412 [2024-04-27 00:57:37.908306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.908802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.908833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.412 [2024-04-27 00:57:37.908855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.412 [2024-04-27 00:57:37.909432] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.412 [2024-04-27 00:57:37.909605] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.412 [2024-04-27 00:57:37.909613] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.412 [2024-04-27 00:57:37.909619] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.412 [2024-04-27 00:57:37.912288] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.412 [2024-04-27 00:57:37.920502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.412 [2024-04-27 00:57:37.921117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.921536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.921566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.412 [2024-04-27 00:57:37.921588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.412 [2024-04-27 00:57:37.922057] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.412 [2024-04-27 00:57:37.922246] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.412 [2024-04-27 00:57:37.922255] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.412 [2024-04-27 00:57:37.922261] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.412 [2024-04-27 00:57:37.926063] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.412 [2024-04-27 00:57:37.934293] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.412 [2024-04-27 00:57:37.934942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.935290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.935322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.412 [2024-04-27 00:57:37.935344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.412 [2024-04-27 00:57:37.935919] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.412 [2024-04-27 00:57:37.936427] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.412 [2024-04-27 00:57:37.936436] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.412 [2024-04-27 00:57:37.936441] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.412 [2024-04-27 00:57:37.939108] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.412 [2024-04-27 00:57:37.947193] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.412 [2024-04-27 00:57:37.947832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.948122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.948155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.412 [2024-04-27 00:57:37.948177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.412 [2024-04-27 00:57:37.948531] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.412 [2024-04-27 00:57:37.948703] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.412 [2024-04-27 00:57:37.948710] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.412 [2024-04-27 00:57:37.948716] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.412 [2024-04-27 00:57:37.951388] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.412 [2024-04-27 00:57:37.959998] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.412 [2024-04-27 00:57:37.960673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.961110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.961142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.412 [2024-04-27 00:57:37.961164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.412 [2024-04-27 00:57:37.961643] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.412 [2024-04-27 00:57:37.961814] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.412 [2024-04-27 00:57:37.961822] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.412 [2024-04-27 00:57:37.961828] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.412 [2024-04-27 00:57:37.964501] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.412 [2024-04-27 00:57:37.972894] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.412 [2024-04-27 00:57:37.973559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.973979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.974009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.412 [2024-04-27 00:57:37.974030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.412 [2024-04-27 00:57:37.974504] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.412 [2024-04-27 00:57:37.974676] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.412 [2024-04-27 00:57:37.974684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.412 [2024-04-27 00:57:37.974690] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.412 [2024-04-27 00:57:37.977359] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.412 [2024-04-27 00:57:37.985715] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.412 [2024-04-27 00:57:37.986349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.986720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.412 [2024-04-27 00:57:37.986751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.412 [2024-04-27 00:57:37.986772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.412 [2024-04-27 00:57:37.987291] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.412 [2024-04-27 00:57:37.987463] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.412 [2024-04-27 00:57:37.987471] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.412 [2024-04-27 00:57:37.987477] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.412 [2024-04-27 00:57:37.990203] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.412 [2024-04-27 00:57:37.998652] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.413 [2024-04-27 00:57:37.999330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:37.999746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:37.999776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.413 [2024-04-27 00:57:37.999797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.413 [2024-04-27 00:57:38.000300] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.413 [2024-04-27 00:57:38.000472] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.413 [2024-04-27 00:57:38.000480] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.413 [2024-04-27 00:57:38.000486] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.413 [2024-04-27 00:57:38.003157] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.413 [2024-04-27 00:57:38.011520] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.413 [2024-04-27 00:57:38.012188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.012598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.012628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.413 [2024-04-27 00:57:38.012648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.413 [2024-04-27 00:57:38.012835] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.413 [2024-04-27 00:57:38.012997] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.413 [2024-04-27 00:57:38.013004] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.413 [2024-04-27 00:57:38.013010] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.413 [2024-04-27 00:57:38.015695] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.413 [2024-04-27 00:57:38.024418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.413 [2024-04-27 00:57:38.025066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.025582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.025612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.413 [2024-04-27 00:57:38.025633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.413 [2024-04-27 00:57:38.025873] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.413 [2024-04-27 00:57:38.026045] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.413 [2024-04-27 00:57:38.026053] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.413 [2024-04-27 00:57:38.026059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.413 [2024-04-27 00:57:38.028728] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.413 [2024-04-27 00:57:38.037277] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.413 [2024-04-27 00:57:38.037924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.038414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.038447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.413 [2024-04-27 00:57:38.038454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.413 [2024-04-27 00:57:38.038626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.413 [2024-04-27 00:57:38.038797] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.413 [2024-04-27 00:57:38.038805] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.413 [2024-04-27 00:57:38.038811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.413 [2024-04-27 00:57:38.041434] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.413 [2024-04-27 00:57:38.050117] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.413 [2024-04-27 00:57:38.050785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.051228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.051239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.413 [2024-04-27 00:57:38.051246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.413 [2024-04-27 00:57:38.051418] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.413 [2024-04-27 00:57:38.051589] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.413 [2024-04-27 00:57:38.051597] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.413 [2024-04-27 00:57:38.051603] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.413 [2024-04-27 00:57:38.054242] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.413 [2024-04-27 00:57:38.062984] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.413 [2024-04-27 00:57:38.063633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.064091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.064135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.413 [2024-04-27 00:57:38.064157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.413 [2024-04-27 00:57:38.064435] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.413 [2024-04-27 00:57:38.064688] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.413 [2024-04-27 00:57:38.064698] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.413 [2024-04-27 00:57:38.064707] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.413 [2024-04-27 00:57:38.068748] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.413 [2024-04-27 00:57:38.076405] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.413 [2024-04-27 00:57:38.077091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.077343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.077373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.413 [2024-04-27 00:57:38.077394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.413 [2024-04-27 00:57:38.077625] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.413 [2024-04-27 00:57:38.077797] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.413 [2024-04-27 00:57:38.077804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.413 [2024-04-27 00:57:38.077810] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.413 [2024-04-27 00:57:38.080603] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.413 [2024-04-27 00:57:38.089231] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.413 [2024-04-27 00:57:38.089876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.090037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.090047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.413 [2024-04-27 00:57:38.090053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.413 [2024-04-27 00:57:38.090245] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.413 [2024-04-27 00:57:38.090416] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.413 [2024-04-27 00:57:38.090424] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.413 [2024-04-27 00:57:38.090431] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.413 [2024-04-27 00:57:38.093099] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.413 [2024-04-27 00:57:38.102225] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.413 [2024-04-27 00:57:38.102901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.103357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.413 [2024-04-27 00:57:38.103369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.413 [2024-04-27 00:57:38.103380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.413 [2024-04-27 00:57:38.103558] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.413 [2024-04-27 00:57:38.103747] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.413 [2024-04-27 00:57:38.103760] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.413 [2024-04-27 00:57:38.103767] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.674 [2024-04-27 00:57:38.106647] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.674 [2024-04-27 00:57:38.115228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.674 [2024-04-27 00:57:38.115876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.116398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.116431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.674 [2024-04-27 00:57:38.116454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.674 [2024-04-27 00:57:38.116864] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.674 [2024-04-27 00:57:38.117037] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.674 [2024-04-27 00:57:38.117044] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.674 [2024-04-27 00:57:38.117051] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.674 [2024-04-27 00:57:38.119778] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.674 [2024-04-27 00:57:38.128290] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.674 [2024-04-27 00:57:38.128907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.129125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.129136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.674 [2024-04-27 00:57:38.129142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.674 [2024-04-27 00:57:38.129314] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.674 [2024-04-27 00:57:38.129485] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.674 [2024-04-27 00:57:38.129493] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.674 [2024-04-27 00:57:38.129499] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.674 [2024-04-27 00:57:38.132255] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.674 [2024-04-27 00:57:38.141141] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.674 [2024-04-27 00:57:38.141653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.142126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.142159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.674 [2024-04-27 00:57:38.142180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.674 [2024-04-27 00:57:38.142718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.674 [2024-04-27 00:57:38.142890] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.674 [2024-04-27 00:57:38.142898] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.674 [2024-04-27 00:57:38.142904] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.674 [2024-04-27 00:57:38.145619] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.674 [2024-04-27 00:57:38.154048] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.674 [2024-04-27 00:57:38.154658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.155110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.155142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.674 [2024-04-27 00:57:38.155163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.674 [2024-04-27 00:57:38.155739] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.674 [2024-04-27 00:57:38.156205] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.674 [2024-04-27 00:57:38.156216] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.674 [2024-04-27 00:57:38.156226] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.674 [2024-04-27 00:57:38.160269] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.674 [2024-04-27 00:57:38.167334] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.674 [2024-04-27 00:57:38.167991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.168434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.168466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.674 [2024-04-27 00:57:38.168487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.674 [2024-04-27 00:57:38.168951] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.674 [2024-04-27 00:57:38.169130] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.674 [2024-04-27 00:57:38.169138] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.674 [2024-04-27 00:57:38.169145] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.674 [2024-04-27 00:57:38.171962] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.674 [2024-04-27 00:57:38.180337] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.674 [2024-04-27 00:57:38.180984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.181427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.181438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.674 [2024-04-27 00:57:38.181445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.674 [2024-04-27 00:57:38.181616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.674 [2024-04-27 00:57:38.181791] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.674 [2024-04-27 00:57:38.181798] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.674 [2024-04-27 00:57:38.181804] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.674 [2024-04-27 00:57:38.184540] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.674 [2024-04-27 00:57:38.193117] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.674 [2024-04-27 00:57:38.193754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.194229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.194261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.674 [2024-04-27 00:57:38.194283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.674 [2024-04-27 00:57:38.194860] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.674 [2024-04-27 00:57:38.195180] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.674 [2024-04-27 00:57:38.195192] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.674 [2024-04-27 00:57:38.195201] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.674 [2024-04-27 00:57:38.199243] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.674 [2024-04-27 00:57:38.206722] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.674 [2024-04-27 00:57:38.207362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.207815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.674 [2024-04-27 00:57:38.207845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.674 [2024-04-27 00:57:38.207866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.674 [2024-04-27 00:57:38.208456] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.674 [2024-04-27 00:57:38.209035] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.674 [2024-04-27 00:57:38.209058] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.674 [2024-04-27 00:57:38.209087] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.674 [2024-04-27 00:57:38.211810] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.674 [2024-04-27 00:57:38.219499] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.675 [2024-04-27 00:57:38.220122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.220596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.220627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.675 [2024-04-27 00:57:38.220650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.675 [2024-04-27 00:57:38.221129] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.675 [2024-04-27 00:57:38.221302] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.675 [2024-04-27 00:57:38.221313] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.675 [2024-04-27 00:57:38.221319] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.675 [2024-04-27 00:57:38.223984] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.675 [2024-04-27 00:57:38.232361] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.675 [2024-04-27 00:57:38.232999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.233452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.233463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.675 [2024-04-27 00:57:38.233470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.675 [2024-04-27 00:57:38.233640] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.675 [2024-04-27 00:57:38.233811] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.675 [2024-04-27 00:57:38.233819] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.675 [2024-04-27 00:57:38.233825] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.675 [2024-04-27 00:57:38.236492] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.675 [2024-04-27 00:57:38.245246] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.675 [2024-04-27 00:57:38.245893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.246404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.246447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.675 [2024-04-27 00:57:38.246468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.675 [2024-04-27 00:57:38.246993] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.675 [2024-04-27 00:57:38.247167] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.675 [2024-04-27 00:57:38.247176] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.675 [2024-04-27 00:57:38.247181] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.675 [2024-04-27 00:57:38.249892] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.675 [2024-04-27 00:57:38.258170] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.675 [2024-04-27 00:57:38.258801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.259273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.259305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.675 [2024-04-27 00:57:38.259326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.675 [2024-04-27 00:57:38.259902] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.675 [2024-04-27 00:57:38.260090] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.675 [2024-04-27 00:57:38.260098] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.675 [2024-04-27 00:57:38.260107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.675 [2024-04-27 00:57:38.262711] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.675 [2024-04-27 00:57:38.271047] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.675 [2024-04-27 00:57:38.271683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.272201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.272233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.675 [2024-04-27 00:57:38.272254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.675 [2024-04-27 00:57:38.272829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.675 [2024-04-27 00:57:38.273049] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.675 [2024-04-27 00:57:38.273056] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.675 [2024-04-27 00:57:38.273062] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.675 [2024-04-27 00:57:38.275733] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.675 [2024-04-27 00:57:38.283956] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.675 [2024-04-27 00:57:38.284623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.285134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.285165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.675 [2024-04-27 00:57:38.285186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.675 [2024-04-27 00:57:38.285762] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.675 [2024-04-27 00:57:38.286350] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.675 [2024-04-27 00:57:38.286375] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.675 [2024-04-27 00:57:38.286404] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.675 [2024-04-27 00:57:38.290472] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.675 [2024-04-27 00:57:38.297723] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.675 [2024-04-27 00:57:38.298381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.298880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.298911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.675 [2024-04-27 00:57:38.298932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.675 [2024-04-27 00:57:38.299412] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.675 [2024-04-27 00:57:38.299585] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.675 [2024-04-27 00:57:38.299593] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.675 [2024-04-27 00:57:38.299599] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.675 [2024-04-27 00:57:38.302335] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.675 [2024-04-27 00:57:38.310570] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.675 [2024-04-27 00:57:38.311149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.311626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.311656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.675 [2024-04-27 00:57:38.311677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.675 [2024-04-27 00:57:38.312038] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.675 [2024-04-27 00:57:38.312213] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.675 [2024-04-27 00:57:38.312222] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.675 [2024-04-27 00:57:38.312228] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.675 [2024-04-27 00:57:38.314957] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.675 [2024-04-27 00:57:38.323469] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.675 [2024-04-27 00:57:38.324137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.324606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.324636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.675 [2024-04-27 00:57:38.324658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.675 [2024-04-27 00:57:38.325248] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.675 [2024-04-27 00:57:38.325542] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.675 [2024-04-27 00:57:38.325550] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.675 [2024-04-27 00:57:38.325556] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.675 [2024-04-27 00:57:38.328255] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.675 [2024-04-27 00:57:38.336316] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.675 [2024-04-27 00:57:38.336966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.337457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.675 [2024-04-27 00:57:38.337490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.675 [2024-04-27 00:57:38.337511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.676 [2024-04-27 00:57:38.338107] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.676 [2024-04-27 00:57:38.338507] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.676 [2024-04-27 00:57:38.338516] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.676 [2024-04-27 00:57:38.338522] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.676 [2024-04-27 00:57:38.341189] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.676 [2024-04-27 00:57:38.349317] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.676 [2024-04-27 00:57:38.349967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.676 [2024-04-27 00:57:38.350611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.676 [2024-04-27 00:57:38.350643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.676 [2024-04-27 00:57:38.350664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.676 [2024-04-27 00:57:38.351252] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.676 [2024-04-27 00:57:38.351618] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.676 [2024-04-27 00:57:38.351626] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.676 [2024-04-27 00:57:38.351632] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.676 [2024-04-27 00:57:38.354346] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.676 [2024-04-27 00:57:38.362180] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.676 [2024-04-27 00:57:38.362806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.676 [2024-04-27 00:57:38.363254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.676 [2024-04-27 00:57:38.363286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.676 [2024-04-27 00:57:38.363308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.676 [2024-04-27 00:57:38.363884] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.676 [2024-04-27 00:57:38.364518] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.676 [2024-04-27 00:57:38.364559] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.676 [2024-04-27 00:57:38.364584] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.676 [2024-04-27 00:57:38.367527] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.937 [2024-04-27 00:57:38.375359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.937 [2024-04-27 00:57:38.375965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.376470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.376504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.937 [2024-04-27 00:57:38.376527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.937 [2024-04-27 00:57:38.377122] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.937 [2024-04-27 00:57:38.377376] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.937 [2024-04-27 00:57:38.377386] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.937 [2024-04-27 00:57:38.377394] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.937 [2024-04-27 00:57:38.381449] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.937 [2024-04-27 00:57:38.389182] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.937 [2024-04-27 00:57:38.389761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.390134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.390146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.937 [2024-04-27 00:57:38.390153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.937 [2024-04-27 00:57:38.390330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.937 [2024-04-27 00:57:38.390507] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.937 [2024-04-27 00:57:38.390515] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.937 [2024-04-27 00:57:38.390521] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.937 [2024-04-27 00:57:38.393340] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.937 [2024-04-27 00:57:38.402323] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.937 [2024-04-27 00:57:38.402974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.403344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.403355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.937 [2024-04-27 00:57:38.403362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.937 [2024-04-27 00:57:38.403538] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.937 [2024-04-27 00:57:38.403715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.937 [2024-04-27 00:57:38.403723] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.937 [2024-04-27 00:57:38.403729] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.937 [2024-04-27 00:57:38.406550] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.937 [2024-04-27 00:57:38.415365] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.937 [2024-04-27 00:57:38.415994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.416364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.416375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.937 [2024-04-27 00:57:38.416382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.937 [2024-04-27 00:57:38.416558] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.937 [2024-04-27 00:57:38.416734] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.937 [2024-04-27 00:57:38.416742] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.937 [2024-04-27 00:57:38.416748] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.937 [2024-04-27 00:57:38.419571] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.937 [2024-04-27 00:57:38.428417] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.937 [2024-04-27 00:57:38.429001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.429510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.429550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.937 [2024-04-27 00:57:38.429573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.937 [2024-04-27 00:57:38.430162] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.937 [2024-04-27 00:57:38.430579] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.937 [2024-04-27 00:57:38.430588] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.937 [2024-04-27 00:57:38.430594] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.937 [2024-04-27 00:57:38.433421] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.937 [2024-04-27 00:57:38.441604] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.937 [2024-04-27 00:57:38.442240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.442738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.442768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.937 [2024-04-27 00:57:38.442790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.937 [2024-04-27 00:57:38.443013] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.937 [2024-04-27 00:57:38.443196] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.937 [2024-04-27 00:57:38.443205] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.937 [2024-04-27 00:57:38.443211] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.937 [2024-04-27 00:57:38.446030] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.937 [2024-04-27 00:57:38.454627] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.937 [2024-04-27 00:57:38.455281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.455781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.455811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.937 [2024-04-27 00:57:38.455832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.937 [2024-04-27 00:57:38.456419] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.937 [2024-04-27 00:57:38.456848] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.937 [2024-04-27 00:57:38.456856] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.937 [2024-04-27 00:57:38.456862] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.937 [2024-04-27 00:57:38.459685] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.937 [2024-04-27 00:57:38.467493] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.937 [2024-04-27 00:57:38.468115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.468472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.937 [2024-04-27 00:57:38.468508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.937 [2024-04-27 00:57:38.468518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.937 [2024-04-27 00:57:38.468690] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.937 [2024-04-27 00:57:38.468863] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.938 [2024-04-27 00:57:38.468870] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.938 [2024-04-27 00:57:38.468876] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.938 [2024-04-27 00:57:38.472800] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.938 [2024-04-27 00:57:38.481003] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.938 [2024-04-27 00:57:38.481652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.482102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.482134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.938 [2024-04-27 00:57:38.482155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.938 [2024-04-27 00:57:38.482734] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.938 [2024-04-27 00:57:38.482906] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.938 [2024-04-27 00:57:38.482914] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.938 [2024-04-27 00:57:38.482920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.938 [2024-04-27 00:57:38.485632] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.938 [2024-04-27 00:57:38.494003] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.938 [2024-04-27 00:57:38.494659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.495098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.495130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.938 [2024-04-27 00:57:38.495151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.938 [2024-04-27 00:57:38.495726] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.938 [2024-04-27 00:57:38.495898] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.938 [2024-04-27 00:57:38.495906] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.938 [2024-04-27 00:57:38.495911] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.938 [2024-04-27 00:57:38.498622] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.938 [2024-04-27 00:57:38.506858] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.938 [2024-04-27 00:57:38.507424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.507849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.507879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.938 [2024-04-27 00:57:38.507901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.938 [2024-04-27 00:57:38.508495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.938 [2024-04-27 00:57:38.508916] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.938 [2024-04-27 00:57:38.508924] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.938 [2024-04-27 00:57:38.508929] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.938 [2024-04-27 00:57:38.512944] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.938 [2024-04-27 00:57:38.520715] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.938 [2024-04-27 00:57:38.521362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.521827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.521858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.938 [2024-04-27 00:57:38.521878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.938 [2024-04-27 00:57:38.522468] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.938 [2024-04-27 00:57:38.522676] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.938 [2024-04-27 00:57:38.522683] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.938 [2024-04-27 00:57:38.522689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.938 [2024-04-27 00:57:38.525406] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.938 [2024-04-27 00:57:38.533734] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.938 [2024-04-27 00:57:38.534392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.534771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.534800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.938 [2024-04-27 00:57:38.534822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.938 [2024-04-27 00:57:38.535406] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.938 [2024-04-27 00:57:38.535981] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.938 [2024-04-27 00:57:38.535990] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.938 [2024-04-27 00:57:38.535996] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.938 [2024-04-27 00:57:38.538678] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.938 [2024-04-27 00:57:38.546670] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.938 [2024-04-27 00:57:38.547327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.547699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.547729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.938 [2024-04-27 00:57:38.547751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.938 [2024-04-27 00:57:38.548337] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.938 [2024-04-27 00:57:38.548811] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.938 [2024-04-27 00:57:38.548819] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.938 [2024-04-27 00:57:38.548825] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.938 [2024-04-27 00:57:38.551536] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.938 [2024-04-27 00:57:38.559603] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.938 [2024-04-27 00:57:38.560256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.560672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.938 [2024-04-27 00:57:38.560702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.938 [2024-04-27 00:57:38.560723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.938 [2024-04-27 00:57:38.561125] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.938 [2024-04-27 00:57:38.561297] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.938 [2024-04-27 00:57:38.561305] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.938 [2024-04-27 00:57:38.561310] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.938 [2024-04-27 00:57:38.564048] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.938 [2024-04-27 00:57:38.572568] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.938 [2024-04-27 00:57:38.573163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.939 [2024-04-27 00:57:38.573584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.939 [2024-04-27 00:57:38.573613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.939 [2024-04-27 00:57:38.573634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.939 [2024-04-27 00:57:38.574093] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.939 [2024-04-27 00:57:38.574266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.939 [2024-04-27 00:57:38.574274] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.939 [2024-04-27 00:57:38.574280] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.939 [2024-04-27 00:57:38.576950] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.939 [2024-04-27 00:57:38.585431] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.939 [2024-04-27 00:57:38.586017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.939 [2024-04-27 00:57:38.586388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.939 [2024-04-27 00:57:38.586398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.939 [2024-04-27 00:57:38.586405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.939 [2024-04-27 00:57:38.586576] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.939 [2024-04-27 00:57:38.586748] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.939 [2024-04-27 00:57:38.586760] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.939 [2024-04-27 00:57:38.586766] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.939 [2024-04-27 00:57:38.589439] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.939 [2024-04-27 00:57:38.598527] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.939 [2024-04-27 00:57:38.599189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.939 [2024-04-27 00:57:38.599563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.939 [2024-04-27 00:57:38.599592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.939 [2024-04-27 00:57:38.599614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.939 [2024-04-27 00:57:38.600174] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.939 [2024-04-27 00:57:38.600351] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.939 [2024-04-27 00:57:38.600370] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.939 [2024-04-27 00:57:38.600376] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.939 [2024-04-27 00:57:38.604292] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:45.939 [2024-04-27 00:57:38.612058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.939 [2024-04-27 00:57:38.612700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.939 [2024-04-27 00:57:38.613134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.939 [2024-04-27 00:57:38.613165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.939 [2024-04-27 00:57:38.613187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.939 [2024-04-27 00:57:38.613761] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.939 [2024-04-27 00:57:38.614170] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.939 [2024-04-27 00:57:38.614179] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.939 [2024-04-27 00:57:38.614184] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.939 [2024-04-27 00:57:38.616918] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:45.939 [2024-04-27 00:57:38.625004] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:45.939 [2024-04-27 00:57:38.625689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.939 [2024-04-27 00:57:38.626073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.939 [2024-04-27 00:57:38.626084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:45.939 [2024-04-27 00:57:38.626091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:45.939 [2024-04-27 00:57:38.626268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:45.939 [2024-04-27 00:57:38.626467] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:45.939 [2024-04-27 00:57:38.626483] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:45.939 [2024-04-27 00:57:38.626498] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:45.939 [2024-04-27 00:57:38.629370] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.199 [2024-04-27 00:57:38.638140] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.199 [2024-04-27 00:57:38.638657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.199 [2024-04-27 00:57:38.639100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.199 [2024-04-27 00:57:38.639133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.199 [2024-04-27 00:57:38.639156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.199 [2024-04-27 00:57:38.639397] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.199 [2024-04-27 00:57:38.639569] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.199 [2024-04-27 00:57:38.639577] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.200 [2024-04-27 00:57:38.639583] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.200 [2024-04-27 00:57:38.642332] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.200 [2024-04-27 00:57:38.651081] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.200 [2024-04-27 00:57:38.651656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.652119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.652129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.200 [2024-04-27 00:57:38.652136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.200 [2024-04-27 00:57:38.652308] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.200 [2024-04-27 00:57:38.652479] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.200 [2024-04-27 00:57:38.652487] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.200 [2024-04-27 00:57:38.652493] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.200 [2024-04-27 00:57:38.655160] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.200 [2024-04-27 00:57:38.663934] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.200 [2024-04-27 00:57:38.664510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.664953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.664982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.200 [2024-04-27 00:57:38.665004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.200 [2024-04-27 00:57:38.665595] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.200 [2024-04-27 00:57:38.666176] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.200 [2024-04-27 00:57:38.666185] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.200 [2024-04-27 00:57:38.666191] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.200 [2024-04-27 00:57:38.668924] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.200 [2024-04-27 00:57:38.676797] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.200 [2024-04-27 00:57:38.677445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.677821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.677851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.200 [2024-04-27 00:57:38.677872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.200 [2024-04-27 00:57:38.678148] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.200 [2024-04-27 00:57:38.678320] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.200 [2024-04-27 00:57:38.678328] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.200 [2024-04-27 00:57:38.678334] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.200 [2024-04-27 00:57:38.681008] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.200 [2024-04-27 00:57:38.689751] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.200 [2024-04-27 00:57:38.690422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.690848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.690878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.200 [2024-04-27 00:57:38.690900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.200 [2024-04-27 00:57:38.691394] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.200 [2024-04-27 00:57:38.691647] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.200 [2024-04-27 00:57:38.691657] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.200 [2024-04-27 00:57:38.691666] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.200 [2024-04-27 00:57:38.695715] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.200 [2024-04-27 00:57:38.703075] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.200 [2024-04-27 00:57:38.703706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.704202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.704234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.200 [2024-04-27 00:57:38.704256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.200 [2024-04-27 00:57:38.704830] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.200 [2024-04-27 00:57:38.705075] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.200 [2024-04-27 00:57:38.705083] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.200 [2024-04-27 00:57:38.705089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.200 [2024-04-27 00:57:38.707840] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.200 [2024-04-27 00:57:38.715942] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.200 [2024-04-27 00:57:38.716513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.716913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.716942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.200 [2024-04-27 00:57:38.716963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.200 [2024-04-27 00:57:38.717179] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.200 [2024-04-27 00:57:38.717351] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.200 [2024-04-27 00:57:38.717359] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.200 [2024-04-27 00:57:38.717365] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.200 [2024-04-27 00:57:38.720065] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.200 [2024-04-27 00:57:38.728978] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.200 [2024-04-27 00:57:38.729636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.730047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.730091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.200 [2024-04-27 00:57:38.730113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.200 [2024-04-27 00:57:38.730690] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.200 [2024-04-27 00:57:38.731276] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.200 [2024-04-27 00:57:38.731301] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.200 [2024-04-27 00:57:38.731320] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.200 [2024-04-27 00:57:38.735385] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.200 [2024-04-27 00:57:38.742815] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.200 [2024-04-27 00:57:38.743457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.743822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.743852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.200 [2024-04-27 00:57:38.743873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.200 [2024-04-27 00:57:38.744440] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.200 [2024-04-27 00:57:38.744612] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.200 [2024-04-27 00:57:38.744620] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.200 [2024-04-27 00:57:38.744627] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.200 [2024-04-27 00:57:38.747469] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.200 [2024-04-27 00:57:38.755743] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.200 [2024-04-27 00:57:38.756395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.756914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.200 [2024-04-27 00:57:38.756944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.200 [2024-04-27 00:57:38.756966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.200 [2024-04-27 00:57:38.757516] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.200 [2024-04-27 00:57:38.757688] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.200 [2024-04-27 00:57:38.757696] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.200 [2024-04-27 00:57:38.757702] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.200 [2024-04-27 00:57:38.760470] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.200 [2024-04-27 00:57:38.768698] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.200 [2024-04-27 00:57:38.769254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.769655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.769665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.201 [2024-04-27 00:57:38.769672] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.201 [2024-04-27 00:57:38.769843] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.201 [2024-04-27 00:57:38.770015] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.201 [2024-04-27 00:57:38.770022] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.201 [2024-04-27 00:57:38.770029] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.201 [2024-04-27 00:57:38.772770] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.201 [2024-04-27 00:57:38.781590] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.201 [2024-04-27 00:57:38.782182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.782659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.782689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.201 [2024-04-27 00:57:38.782710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.201 [2024-04-27 00:57:38.783277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.201 [2024-04-27 00:57:38.783449] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.201 [2024-04-27 00:57:38.783457] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.201 [2024-04-27 00:57:38.783463] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.201 [2024-04-27 00:57:38.786166] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.201 [2024-04-27 00:57:38.794528] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.201 [2024-04-27 00:57:38.795193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.795662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.795700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.201 [2024-04-27 00:57:38.795721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.201 [2024-04-27 00:57:38.796304] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.201 [2024-04-27 00:57:38.796649] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.201 [2024-04-27 00:57:38.796657] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.201 [2024-04-27 00:57:38.796663] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.201 [2024-04-27 00:57:38.799334] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.201 [2024-04-27 00:57:38.807385] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.201 [2024-04-27 00:57:38.808009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.808411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.808443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.201 [2024-04-27 00:57:38.808464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.201 [2024-04-27 00:57:38.808696] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.201 [2024-04-27 00:57:38.808867] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.201 [2024-04-27 00:57:38.808875] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.201 [2024-04-27 00:57:38.808881] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.201 [2024-04-27 00:57:38.811550] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.201 [2024-04-27 00:57:38.820322] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.201 [2024-04-27 00:57:38.820997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.821492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.821524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.201 [2024-04-27 00:57:38.821546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.201 [2024-04-27 00:57:38.821806] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.201 [2024-04-27 00:57:38.821977] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.201 [2024-04-27 00:57:38.821985] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.201 [2024-04-27 00:57:38.821992] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.201 [2024-04-27 00:57:38.824710] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.201 [2024-04-27 00:57:38.833191] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.201 [2024-04-27 00:57:38.833844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.834357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.834392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.201 [2024-04-27 00:57:38.834402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.201 [2024-04-27 00:57:38.834574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.201 [2024-04-27 00:57:38.834746] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.201 [2024-04-27 00:57:38.834753] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.201 [2024-04-27 00:57:38.834759] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.201 [2024-04-27 00:57:38.837523] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.201 [2024-04-27 00:57:38.846164] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.201 [2024-04-27 00:57:38.846726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.847234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.847266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.201 [2024-04-27 00:57:38.847287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.201 [2024-04-27 00:57:38.847538] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.201 [2024-04-27 00:57:38.847709] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.201 [2024-04-27 00:57:38.847717] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.201 [2024-04-27 00:57:38.847723] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.201 [2024-04-27 00:57:38.850448] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.201 [2024-04-27 00:57:38.859034] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.201 [2024-04-27 00:57:38.859672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.860186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.860218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.201 [2024-04-27 00:57:38.860239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.201 [2024-04-27 00:57:38.860785] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.201 [2024-04-27 00:57:38.860957] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.201 [2024-04-27 00:57:38.860965] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.201 [2024-04-27 00:57:38.860971] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.201 [2024-04-27 00:57:38.863637] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.201 [2024-04-27 00:57:38.871916] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.201 [2024-04-27 00:57:38.872537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.873051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.873093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.201 [2024-04-27 00:57:38.873116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.201 [2024-04-27 00:57:38.873705] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.201 [2024-04-27 00:57:38.873877] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.201 [2024-04-27 00:57:38.873885] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.201 [2024-04-27 00:57:38.873890] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.201 [2024-04-27 00:57:38.876607] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.201 [2024-04-27 00:57:38.885144] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.201 [2024-04-27 00:57:38.885820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.886336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.201 [2024-04-27 00:57:38.886370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.201 [2024-04-27 00:57:38.886391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.201 [2024-04-27 00:57:38.886657] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.202 [2024-04-27 00:57:38.886828] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.202 [2024-04-27 00:57:38.886836] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.202 [2024-04-27 00:57:38.886842] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.202 [2024-04-27 00:57:38.889672] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.462 [2024-04-27 00:57:38.898068] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.462 [2024-04-27 00:57:38.898764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.899286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.899320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.462 [2024-04-27 00:57:38.899342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.462 [2024-04-27 00:57:38.899930] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.462 [2024-04-27 00:57:38.900158] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.462 [2024-04-27 00:57:38.900172] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.462 [2024-04-27 00:57:38.900179] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.462 [2024-04-27 00:57:38.902951] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.462 [2024-04-27 00:57:38.911027] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.462 [2024-04-27 00:57:38.911702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.912220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.912253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.462 [2024-04-27 00:57:38.912275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.462 [2024-04-27 00:57:38.912800] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.462 [2024-04-27 00:57:38.912977] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.462 [2024-04-27 00:57:38.912985] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.462 [2024-04-27 00:57:38.912991] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.462 [2024-04-27 00:57:38.915728] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.462 [2024-04-27 00:57:38.923806] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.462 [2024-04-27 00:57:38.924428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.924925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.924955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.462 [2024-04-27 00:57:38.924977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.462 [2024-04-27 00:57:38.925530] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.462 [2024-04-27 00:57:38.925702] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.462 [2024-04-27 00:57:38.925711] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.462 [2024-04-27 00:57:38.925717] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.462 [2024-04-27 00:57:38.928453] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.462 [2024-04-27 00:57:38.936628] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.462 [2024-04-27 00:57:38.937296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.937792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.937822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.462 [2024-04-27 00:57:38.937844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.462 [2024-04-27 00:57:38.938434] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.462 [2024-04-27 00:57:38.938891] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.462 [2024-04-27 00:57:38.938899] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.462 [2024-04-27 00:57:38.938905] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.462 [2024-04-27 00:57:38.941566] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.462 [2024-04-27 00:57:38.949415] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.462 [2024-04-27 00:57:38.950040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.950561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.950592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.462 [2024-04-27 00:57:38.950614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.462 [2024-04-27 00:57:38.951204] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.462 [2024-04-27 00:57:38.951783] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.462 [2024-04-27 00:57:38.951815] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.462 [2024-04-27 00:57:38.951835] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.462 [2024-04-27 00:57:38.954549] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.462 [2024-04-27 00:57:38.962290] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.462 [2024-04-27 00:57:38.962942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.963436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.963468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.462 [2024-04-27 00:57:38.963490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.462 [2024-04-27 00:57:38.963988] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.462 [2024-04-27 00:57:38.964244] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.462 [2024-04-27 00:57:38.964255] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.462 [2024-04-27 00:57:38.964264] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.462 [2024-04-27 00:57:38.968302] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.462 [2024-04-27 00:57:38.975766] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.462 [2024-04-27 00:57:38.976395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.976814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.976844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.462 [2024-04-27 00:57:38.976865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.462 [2024-04-27 00:57:38.977453] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.462 [2024-04-27 00:57:38.977936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.462 [2024-04-27 00:57:38.977944] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.462 [2024-04-27 00:57:38.977950] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.462 [2024-04-27 00:57:38.980660] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.462 [2024-04-27 00:57:38.988698] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.462 [2024-04-27 00:57:38.989321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.989831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.462 [2024-04-27 00:57:38.989861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.462 [2024-04-27 00:57:38.989882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.462 [2024-04-27 00:57:38.990473] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.462 [2024-04-27 00:57:38.990955] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.462 [2024-04-27 00:57:38.990963] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.462 [2024-04-27 00:57:38.990972] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.463 [2024-04-27 00:57:38.993695] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.463 [2024-04-27 00:57:39.001664] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.463 [2024-04-27 00:57:39.002308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.002812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.002843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.463 [2024-04-27 00:57:39.002864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.463 [2024-04-27 00:57:39.003110] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.463 [2024-04-27 00:57:39.003282] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.463 [2024-04-27 00:57:39.003290] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.463 [2024-04-27 00:57:39.003296] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.463 [2024-04-27 00:57:39.005961] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.463 [2024-04-27 00:57:39.014434] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.463 [2024-04-27 00:57:39.015095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.015613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.015644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.463 [2024-04-27 00:57:39.015665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.463 [2024-04-27 00:57:39.016203] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.463 [2024-04-27 00:57:39.016376] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.463 [2024-04-27 00:57:39.016384] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.463 [2024-04-27 00:57:39.016390] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.463 [2024-04-27 00:57:39.019058] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.463 [2024-04-27 00:57:39.027276] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.463 [2024-04-27 00:57:39.027918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.028409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.028441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.463 [2024-04-27 00:57:39.028463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.463 [2024-04-27 00:57:39.029028] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.463 [2024-04-27 00:57:39.029204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.463 [2024-04-27 00:57:39.029212] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.463 [2024-04-27 00:57:39.029218] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.463 [2024-04-27 00:57:39.031881] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.463 [2024-04-27 00:57:39.040104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.463 [2024-04-27 00:57:39.040752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.041192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.041225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.463 [2024-04-27 00:57:39.041246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.463 [2024-04-27 00:57:39.041431] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.463 [2024-04-27 00:57:39.041603] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.463 [2024-04-27 00:57:39.041611] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.463 [2024-04-27 00:57:39.041616] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.463 [2024-04-27 00:57:39.044284] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.463 [2024-04-27 00:57:39.052993] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.463 [2024-04-27 00:57:39.053643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.054158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.054189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.463 [2024-04-27 00:57:39.054211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.463 [2024-04-27 00:57:39.054395] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.463 [2024-04-27 00:57:39.054566] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.463 [2024-04-27 00:57:39.054575] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.463 [2024-04-27 00:57:39.054580] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.463 [2024-04-27 00:57:39.057249] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.463 [2024-04-27 00:57:39.065902] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.463 [2024-04-27 00:57:39.066559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.067085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.067119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.463 [2024-04-27 00:57:39.067125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.463 [2024-04-27 00:57:39.067297] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.463 [2024-04-27 00:57:39.067469] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.463 [2024-04-27 00:57:39.067477] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.463 [2024-04-27 00:57:39.067483] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.463 [2024-04-27 00:57:39.070152] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.463 [2024-04-27 00:57:39.078770] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.463 [2024-04-27 00:57:39.079418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.079854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.079884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.463 [2024-04-27 00:57:39.079916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.463 [2024-04-27 00:57:39.080092] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.463 [2024-04-27 00:57:39.080264] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.463 [2024-04-27 00:57:39.080272] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.463 [2024-04-27 00:57:39.080278] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.463 [2024-04-27 00:57:39.082948] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.463 [2024-04-27 00:57:39.091638] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.463 [2024-04-27 00:57:39.092241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.092705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.092735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.463 [2024-04-27 00:57:39.092756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.463 [2024-04-27 00:57:39.093026] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.463 [2024-04-27 00:57:39.093215] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.463 [2024-04-27 00:57:39.093224] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.463 [2024-04-27 00:57:39.093230] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.463 [2024-04-27 00:57:39.095896] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.463 [2024-04-27 00:57:39.104531] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.463 [2024-04-27 00:57:39.105169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.105646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.105676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.463 [2024-04-27 00:57:39.105697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.463 [2024-04-27 00:57:39.106136] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.463 [2024-04-27 00:57:39.106309] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.463 [2024-04-27 00:57:39.106317] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.463 [2024-04-27 00:57:39.106322] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.463 [2024-04-27 00:57:39.108985] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.463 [2024-04-27 00:57:39.117523] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.463 [2024-04-27 00:57:39.118189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.118611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.463 [2024-04-27 00:57:39.118642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.464 [2024-04-27 00:57:39.118664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.464 [2024-04-27 00:57:39.119250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.464 [2024-04-27 00:57:39.119538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.464 [2024-04-27 00:57:39.119547] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.464 [2024-04-27 00:57:39.119553] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.464 [2024-04-27 00:57:39.122244] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.464 [2024-04-27 00:57:39.130523] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.464 [2024-04-27 00:57:39.131107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.464 [2024-04-27 00:57:39.131510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.464 [2024-04-27 00:57:39.131520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.464 [2024-04-27 00:57:39.131527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.464 [2024-04-27 00:57:39.131698] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.464 [2024-04-27 00:57:39.131868] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.464 [2024-04-27 00:57:39.131878] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.464 [2024-04-27 00:57:39.131885] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.464 [2024-04-27 00:57:39.134772] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.464 [2024-04-27 00:57:39.143532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.464 [2024-04-27 00:57:39.144171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.464 [2024-04-27 00:57:39.144585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.464 [2024-04-27 00:57:39.144616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.464 [2024-04-27 00:57:39.144637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.464 [2024-04-27 00:57:39.145186] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.464 [2024-04-27 00:57:39.145370] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.464 [2024-04-27 00:57:39.145378] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.464 [2024-04-27 00:57:39.145384] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.464 [2024-04-27 00:57:39.148031] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.724 [2024-04-27 00:57:39.156784] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.724 [2024-04-27 00:57:39.157370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.157807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.157848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.724 [2024-04-27 00:57:39.157871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.724 [2024-04-27 00:57:39.158363] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.724 [2024-04-27 00:57:39.158536] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.724 [2024-04-27 00:57:39.158544] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.724 [2024-04-27 00:57:39.158550] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.724 [2024-04-27 00:57:39.161346] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.724 [2024-04-27 00:57:39.169803] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.724 [2024-04-27 00:57:39.170425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.170852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.170883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.724 [2024-04-27 00:57:39.170905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.724 [2024-04-27 00:57:39.171498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.724 [2024-04-27 00:57:39.171723] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.724 [2024-04-27 00:57:39.171731] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.724 [2024-04-27 00:57:39.171737] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.724 [2024-04-27 00:57:39.174425] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.724 [2024-04-27 00:57:39.182795] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.724 [2024-04-27 00:57:39.183437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.183878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.183887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.724 [2024-04-27 00:57:39.183907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.724 [2024-04-27 00:57:39.184484] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.724 [2024-04-27 00:57:39.184657] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.724 [2024-04-27 00:57:39.184664] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.724 [2024-04-27 00:57:39.184670] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.724 [2024-04-27 00:57:39.187387] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.724 [2024-04-27 00:57:39.195710] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.724 [2024-04-27 00:57:39.196346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.196730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.196760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.724 [2024-04-27 00:57:39.196789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.724 [2024-04-27 00:57:39.197296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.724 [2024-04-27 00:57:39.197469] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.724 [2024-04-27 00:57:39.197477] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.724 [2024-04-27 00:57:39.197483] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.724 [2024-04-27 00:57:39.200154] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.724 [2024-04-27 00:57:39.208569] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.724 [2024-04-27 00:57:39.209236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.209684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.209714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.724 [2024-04-27 00:57:39.209736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.724 [2024-04-27 00:57:39.210220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.724 [2024-04-27 00:57:39.210393] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.724 [2024-04-27 00:57:39.210401] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.724 [2024-04-27 00:57:39.210407] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.724 [2024-04-27 00:57:39.213074] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.724 [2024-04-27 00:57:39.221401] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.724 [2024-04-27 00:57:39.222066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.222575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.222606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.724 [2024-04-27 00:57:39.222628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.724 [2024-04-27 00:57:39.223219] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.724 [2024-04-27 00:57:39.223567] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.724 [2024-04-27 00:57:39.223575] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.724 [2024-04-27 00:57:39.223581] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.724 [2024-04-27 00:57:39.226253] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.724 [2024-04-27 00:57:39.234302] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.724 [2024-04-27 00:57:39.234964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.235286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.235297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.724 [2024-04-27 00:57:39.235304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.724 [2024-04-27 00:57:39.235479] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.724 [2024-04-27 00:57:39.235651] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.724 [2024-04-27 00:57:39.235659] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.724 [2024-04-27 00:57:39.235665] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.724 [2024-04-27 00:57:39.238296] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.724 [2024-04-27 00:57:39.247200] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.724 [2024-04-27 00:57:39.247845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.248282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.248315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.724 [2024-04-27 00:57:39.248336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.724 [2024-04-27 00:57:39.248912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.724 [2024-04-27 00:57:39.249498] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.724 [2024-04-27 00:57:39.249523] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.724 [2024-04-27 00:57:39.249542] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.724 [2024-04-27 00:57:39.252170] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.724 [2024-04-27 00:57:39.260136] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.724 [2024-04-27 00:57:39.260803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.261293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.724 [2024-04-27 00:57:39.261325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.724 [2024-04-27 00:57:39.261346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.724 [2024-04-27 00:57:39.261810] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.724 [2024-04-27 00:57:39.261981] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.724 [2024-04-27 00:57:39.261989] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.725 [2024-04-27 00:57:39.261995] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.725 [2024-04-27 00:57:39.264664] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.725 [2024-04-27 00:57:39.272950] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.725 [2024-04-27 00:57:39.273621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.274040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.274085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.725 [2024-04-27 00:57:39.274108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.725 [2024-04-27 00:57:39.274685] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.725 [2024-04-27 00:57:39.274893] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.725 [2024-04-27 00:57:39.274901] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.725 [2024-04-27 00:57:39.274906] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.725 [2024-04-27 00:57:39.277580] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.725 [2024-04-27 00:57:39.285775] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.725 [2024-04-27 00:57:39.286372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.286864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.286894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.725 [2024-04-27 00:57:39.286914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.725 [2024-04-27 00:57:39.287381] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.725 [2024-04-27 00:57:39.287553] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.725 [2024-04-27 00:57:39.287560] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.725 [2024-04-27 00:57:39.287566] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.725 [2024-04-27 00:57:39.290289] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.725 [2024-04-27 00:57:39.298656] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.725 [2024-04-27 00:57:39.299327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.299743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.299773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.725 [2024-04-27 00:57:39.299795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.725 [2024-04-27 00:57:39.300021] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.725 [2024-04-27 00:57:39.300197] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.725 [2024-04-27 00:57:39.300207] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.725 [2024-04-27 00:57:39.300213] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.725 [2024-04-27 00:57:39.302957] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.725 [2024-04-27 00:57:39.311502] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.725 [2024-04-27 00:57:39.312065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.312538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.312569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.725 [2024-04-27 00:57:39.312590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.725 [2024-04-27 00:57:39.313181] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.725 [2024-04-27 00:57:39.313769] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.725 [2024-04-27 00:57:39.313780] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.725 [2024-04-27 00:57:39.313786] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.725 [2024-04-27 00:57:39.316453] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.725 [2024-04-27 00:57:39.324369] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.725 [2024-04-27 00:57:39.325003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.325374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.325406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.725 [2024-04-27 00:57:39.325428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.725 [2024-04-27 00:57:39.325711] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.725 [2024-04-27 00:57:39.325883] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.725 [2024-04-27 00:57:39.325890] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.725 [2024-04-27 00:57:39.325896] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.725 [2024-04-27 00:57:39.328569] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.725 [2024-04-27 00:57:39.337150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.725 [2024-04-27 00:57:39.337765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.338186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.338198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.725 [2024-04-27 00:57:39.338205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.725 [2024-04-27 00:57:39.338377] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.725 [2024-04-27 00:57:39.338549] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.725 [2024-04-27 00:57:39.338556] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.725 [2024-04-27 00:57:39.338562] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.725 [2024-04-27 00:57:39.341212] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.725 [2024-04-27 00:57:39.349950] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.725 [2024-04-27 00:57:39.350541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.350943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.350974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.725 [2024-04-27 00:57:39.350995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.725 [2024-04-27 00:57:39.351195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.725 [2024-04-27 00:57:39.351386] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.725 [2024-04-27 00:57:39.351394] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.725 [2024-04-27 00:57:39.351404] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.725 [2024-04-27 00:57:39.354107] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.725 [2024-04-27 00:57:39.362783] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.725 [2024-04-27 00:57:39.363417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.363830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.363860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.725 [2024-04-27 00:57:39.363881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.725 [2024-04-27 00:57:39.364081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.725 [2024-04-27 00:57:39.364268] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.725 [2024-04-27 00:57:39.364276] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.725 [2024-04-27 00:57:39.364282] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.725 [2024-04-27 00:57:39.366953] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.725 [2024-04-27 00:57:39.375791] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.725 [2024-04-27 00:57:39.376434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.376924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.376954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.725 [2024-04-27 00:57:39.376974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.725 [2024-04-27 00:57:39.377565] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.725 [2024-04-27 00:57:39.378086] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.725 [2024-04-27 00:57:39.378094] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.725 [2024-04-27 00:57:39.378101] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.725 [2024-04-27 00:57:39.380766] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.725 [2024-04-27 00:57:39.388862] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.725 [2024-04-27 00:57:39.389519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.725 [2024-04-27 00:57:39.389899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.726 [2024-04-27 00:57:39.389928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.726 [2024-04-27 00:57:39.389949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.726 [2024-04-27 00:57:39.390536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.726 [2024-04-27 00:57:39.390980] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.726 [2024-04-27 00:57:39.390988] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.726 [2024-04-27 00:57:39.390995] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.726 [2024-04-27 00:57:39.393710] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.726 [2024-04-27 00:57:39.401714] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.726 [2024-04-27 00:57:39.402392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.726 [2024-04-27 00:57:39.402817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.726 [2024-04-27 00:57:39.402847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.726 [2024-04-27 00:57:39.402869] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.726 [2024-04-27 00:57:39.403223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.726 [2024-04-27 00:57:39.403395] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.726 [2024-04-27 00:57:39.403403] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.726 [2024-04-27 00:57:39.403410] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.726 [2024-04-27 00:57:39.406151] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.726 [2024-04-27 00:57:39.414755] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.726 [2024-04-27 00:57:39.415447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.726 [2024-04-27 00:57:39.415871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.726 [2024-04-27 00:57:39.415902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.726 [2024-04-27 00:57:39.415924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.726 [2024-04-27 00:57:39.416380] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.726 [2024-04-27 00:57:39.416585] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.726 [2024-04-27 00:57:39.416595] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.726 [2024-04-27 00:57:39.416602] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.986 [2024-04-27 00:57:39.419542] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.986 [2024-04-27 00:57:39.427816] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.986 [2024-04-27 00:57:39.428404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.986 [2024-04-27 00:57:39.428804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.986 [2024-04-27 00:57:39.428835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.986 [2024-04-27 00:57:39.428858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.986 [2024-04-27 00:57:39.429461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.986 [2024-04-27 00:57:39.429715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.986 [2024-04-27 00:57:39.429726] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.986 [2024-04-27 00:57:39.429735] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.986 [2024-04-27 00:57:39.433783] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.986 [2024-04-27 00:57:39.441586] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.986 [2024-04-27 00:57:39.442256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.986 [2024-04-27 00:57:39.442615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.986 [2024-04-27 00:57:39.442646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.986 [2024-04-27 00:57:39.442668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.986 [2024-04-27 00:57:39.442954] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.986 [2024-04-27 00:57:39.443131] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.986 [2024-04-27 00:57:39.443139] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.986 [2024-04-27 00:57:39.443145] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.986 [2024-04-27 00:57:39.445868] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.986 [2024-04-27 00:57:39.454560] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.986 [2024-04-27 00:57:39.455104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.986 [2024-04-27 00:57:39.455578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.986 [2024-04-27 00:57:39.455608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.986 [2024-04-27 00:57:39.455630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.986 [2024-04-27 00:57:39.456192] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.986 [2024-04-27 00:57:39.456365] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.986 [2024-04-27 00:57:39.456372] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.986 [2024-04-27 00:57:39.456378] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.986 [2024-04-27 00:57:39.459045] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.986 [2024-04-27 00:57:39.467432] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.986 [2024-04-27 00:57:39.468096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.986 [2024-04-27 00:57:39.468549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.986 [2024-04-27 00:57:39.468579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.987 [2024-04-27 00:57:39.468600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.987 [2024-04-27 00:57:39.469053] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.987 [2024-04-27 00:57:39.469244] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.987 [2024-04-27 00:57:39.469253] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.987 [2024-04-27 00:57:39.469258] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.987 [2024-04-27 00:57:39.471926] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.987 [2024-04-27 00:57:39.480305] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.987 [2024-04-27 00:57:39.480954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.481424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.481457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.987 [2024-04-27 00:57:39.481480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.987 [2024-04-27 00:57:39.481998] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.987 [2024-04-27 00:57:39.482174] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.987 [2024-04-27 00:57:39.482182] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.987 [2024-04-27 00:57:39.482188] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.987 [2024-04-27 00:57:39.484855] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.987 [2024-04-27 00:57:39.493190] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.987 [2024-04-27 00:57:39.493836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.494092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.494124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.987 [2024-04-27 00:57:39.494146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.987 [2024-04-27 00:57:39.494393] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.987 [2024-04-27 00:57:39.494564] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.987 [2024-04-27 00:57:39.494572] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.987 [2024-04-27 00:57:39.494578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.987 [2024-04-27 00:57:39.497252] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.987 [2024-04-27 00:57:39.506064] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.987 [2024-04-27 00:57:39.506593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.507022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.507052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.987 [2024-04-27 00:57:39.507092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.987 [2024-04-27 00:57:39.507418] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.987 [2024-04-27 00:57:39.507590] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.987 [2024-04-27 00:57:39.507598] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.987 [2024-04-27 00:57:39.507604] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.987 [2024-04-27 00:57:39.510271] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.987 [2024-04-27 00:57:39.518852] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.987 [2024-04-27 00:57:39.519473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.519904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.519941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.987 [2024-04-27 00:57:39.519968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.987 [2024-04-27 00:57:39.520151] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.987 [2024-04-27 00:57:39.520353] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.987 [2024-04-27 00:57:39.520364] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.987 [2024-04-27 00:57:39.520373] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.987 [2024-04-27 00:57:39.524412] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.987 [2024-04-27 00:57:39.532282] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.987 [2024-04-27 00:57:39.532963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.533426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.533458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.987 [2024-04-27 00:57:39.533480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.987 [2024-04-27 00:57:39.534055] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.987 [2024-04-27 00:57:39.534267] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.987 [2024-04-27 00:57:39.534275] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.987 [2024-04-27 00:57:39.534281] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.987 [2024-04-27 00:57:39.536984] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.987 [2024-04-27 00:57:39.545196] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.987 [2024-04-27 00:57:39.545838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.546293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.546325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.987 [2024-04-27 00:57:39.546347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.987 [2024-04-27 00:57:39.546923] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.987 [2024-04-27 00:57:39.547412] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.987 [2024-04-27 00:57:39.547420] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.987 [2024-04-27 00:57:39.547426] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.987 [2024-04-27 00:57:39.550005] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.987 [2024-04-27 00:57:39.558064] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.987 [2024-04-27 00:57:39.558738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.559225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.559257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.987 [2024-04-27 00:57:39.559285] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.987 [2024-04-27 00:57:39.559861] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.987 [2024-04-27 00:57:39.560390] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.987 [2024-04-27 00:57:39.560402] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.987 [2024-04-27 00:57:39.560411] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.987 [2024-04-27 00:57:39.564457] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.987 [2024-04-27 00:57:39.571988] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.987 [2024-04-27 00:57:39.572669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.573148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.573187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.987 [2024-04-27 00:57:39.573195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.987 [2024-04-27 00:57:39.573366] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.987 [2024-04-27 00:57:39.573538] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.987 [2024-04-27 00:57:39.573546] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.987 [2024-04-27 00:57:39.573552] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.987 [2024-04-27 00:57:39.576246] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.987 [2024-04-27 00:57:39.584817] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.987 [2024-04-27 00:57:39.585467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.585889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.987 [2024-04-27 00:57:39.585919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.987 [2024-04-27 00:57:39.585942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.987 [2024-04-27 00:57:39.586240] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.987 [2024-04-27 00:57:39.586414] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.988 [2024-04-27 00:57:39.586422] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.988 [2024-04-27 00:57:39.586428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.988 [2024-04-27 00:57:39.589160] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
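(Editor's note.) The block above repeats a single failure pattern: posix_sock_create's connect() to 10.0.0.2 port 4420 returns errno 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports the socket error, controller reinitialization fails, and bdev_nvme records "Resetting controller failed." before the next retry. That is the expected sequence while nothing is listening on that address. A minimal bash probe (an illustration only, not part of the test scripts) shows the same refusal from the shell:

    # probe the address/port taken from the log above; with no listener this fails,
    # which is the same errno 111 (ECONNREFUSED) the SPDK initiator keeps hitting
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "listener reachable on 10.0.0.2:4420"
    else
        echo "connection refused on 10.0.0.2:4420 (errno 111)"
    fi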
00:23:46.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1809068 Killed "${NVMF_APP[@]}" "$@" 00:23:46.988 00:57:39 -- host/bdevperf.sh@36 -- # tgt_init 00:23:46.988 [2024-04-27 00:57:39.597806] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.988 00:57:39 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:46.988 00:57:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:46.988 [2024-04-27 00:57:39.598465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 00:57:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:46.988 [2024-04-27 00:57:39.598891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 [2024-04-27 00:57:39.598902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.988 [2024-04-27 00:57:39.598909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.988 00:57:39 -- common/autotest_common.sh@10 -- # set +x 00:23:46.988 [2024-04-27 00:57:39.599086] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.988 [2024-04-27 00:57:39.599280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.988 [2024-04-27 00:57:39.599288] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.988 [2024-04-27 00:57:39.599294] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.988 [2024-04-27 00:57:39.602114] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.988 00:57:39 -- nvmf/common.sh@470 -- # nvmfpid=1810487 00:23:46.988 00:57:39 -- nvmf/common.sh@471 -- # waitforlisten 1810487 00:23:46.988 00:57:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:46.988 00:57:39 -- common/autotest_common.sh@817 -- # '[' -z 1810487 ']' 00:23:46.988 00:57:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.988 00:57:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:46.988 00:57:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:46.988 00:57:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:46.988 00:57:39 -- common/autotest_common.sh@10 -- # set +x 00:23:46.988 [2024-04-27 00:57:39.610922] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.988 [2024-04-27 00:57:39.611583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 [2024-04-27 00:57:39.612027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 [2024-04-27 00:57:39.612037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.988 [2024-04-27 00:57:39.612044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.988 [2024-04-27 00:57:39.612225] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.988 [2024-04-27 00:57:39.612401] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.988 [2024-04-27 00:57:39.612409] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.988 [2024-04-27 00:57:39.612415] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.988 [2024-04-27 00:57:39.615234] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.988 [2024-04-27 00:57:39.624040] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.988 [2024-04-27 00:57:39.624697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 [2024-04-27 00:57:39.625075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 [2024-04-27 00:57:39.625086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.988 [2024-04-27 00:57:39.625093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.988 [2024-04-27 00:57:39.625270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.988 [2024-04-27 00:57:39.625450] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.988 [2024-04-27 00:57:39.625458] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.988 [2024-04-27 00:57:39.625464] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.988 [2024-04-27 00:57:39.628284] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
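(Editor's note.) The shell trace interleaved above shows bdevperf.sh killing the previous nvmf_tgt (PID 1809068), then tgt_init/nvmfappstart launching a fresh target (PID 1810487) inside the cvl_0_0_ns_spdk namespace and waiting for it to listen on /var/tmp/spdk.sock. A rough sketch of that wait step, assuming a ~10-second budget (the real logic lives in autotest_common.sh's waitforlisten, which also checks the PID):

    # poll until the new target's RPC UNIX socket appears; a stand-in for waitforlisten
    for _ in $(seq 1 100); do
        if [ -S /var/tmp/spdk.sock ]; then
            echo "nvmf_tgt is listening on /var/tmp/spdk.sock"
            break
        fi
        sleep 0.1
    done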
00:23:46.988 [2024-04-27 00:57:39.637106] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.988 [2024-04-27 00:57:39.637781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 [2024-04-27 00:57:39.638228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 [2024-04-27 00:57:39.638239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.988 [2024-04-27 00:57:39.638246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.988 [2024-04-27 00:57:39.638423] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.988 [2024-04-27 00:57:39.638599] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.988 [2024-04-27 00:57:39.638607] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.988 [2024-04-27 00:57:39.638614] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.988 [2024-04-27 00:57:39.641447] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.988 [2024-04-27 00:57:39.649656] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:23:46.988 [2024-04-27 00:57:39.649693] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.988 [2024-04-27 00:57:39.650257] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.988 [2024-04-27 00:57:39.650908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 [2024-04-27 00:57:39.651355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 [2024-04-27 00:57:39.651365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.988 [2024-04-27 00:57:39.651373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.988 [2024-04-27 00:57:39.651551] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.988 [2024-04-27 00:57:39.651727] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.988 [2024-04-27 00:57:39.651736] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.988 [2024-04-27 00:57:39.651743] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.988 [2024-04-27 00:57:39.654561] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:46.988 [2024-04-27 00:57:39.663387] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.988 [2024-04-27 00:57:39.664025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 [2024-04-27 00:57:39.664471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 [2024-04-27 00:57:39.664482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.988 [2024-04-27 00:57:39.664489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.988 [2024-04-27 00:57:39.664669] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.988 [2024-04-27 00:57:39.664846] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.988 [2024-04-27 00:57:39.664854] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.988 [2024-04-27 00:57:39.664861] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:46.988 [2024-04-27 00:57:39.667652] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:46.988 [2024-04-27 00:57:39.676460] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:46.988 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.988 [2024-04-27 00:57:39.677108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 [2024-04-27 00:57:39.677536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.988 [2024-04-27 00:57:39.677547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:46.988 [2024-04-27 00:57:39.677554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:46.988 [2024-04-27 00:57:39.677733] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:46.988 [2024-04-27 00:57:39.677911] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:46.988 [2024-04-27 00:57:39.677920] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:46.988 [2024-04-27 00:57:39.677926] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.249 [2024-04-27 00:57:39.680864] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
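(Editor's note.) The "EAL: No free 2048 kB hugepages reported on node 1" notice above is printed by DPDK during target start-up and means NUMA node 1 has no 2 MiB hugepages reserved; it is informational unless the target later fails to allocate memory. The per-node reservation can be inspected through the standard sysfs layout:

    # per-NUMA-node count of reserved 2 MiB hugepages (standard Linux sysfs paths)
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages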
00:23:47.249 [2024-04-27 00:57:39.689636] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.249 [2024-04-27 00:57:39.690306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.249 [2024-04-27 00:57:39.690692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.249 [2024-04-27 00:57:39.690702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.249 [2024-04-27 00:57:39.690710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.249 [2024-04-27 00:57:39.690888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.249 [2024-04-27 00:57:39.691066] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.249 [2024-04-27 00:57:39.691080] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.249 [2024-04-27 00:57:39.691088] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.249 [2024-04-27 00:57:39.693904] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.249 [2024-04-27 00:57:39.702687] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.249 [2024-04-27 00:57:39.703338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.249 [2024-04-27 00:57:39.703713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.249 [2024-04-27 00:57:39.703723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.249 [2024-04-27 00:57:39.703731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.249 [2024-04-27 00:57:39.703911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.249 [2024-04-27 00:57:39.704093] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.249 [2024-04-27 00:57:39.704101] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.249 [2024-04-27 00:57:39.704108] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.249 [2024-04-27 00:57:39.706889] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.249 [2024-04-27 00:57:39.707420] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:47.249 [2024-04-27 00:57:39.715801] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.249 [2024-04-27 00:57:39.716441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.249 [2024-04-27 00:57:39.716893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.249 [2024-04-27 00:57:39.716904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.249 [2024-04-27 00:57:39.716912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.249 [2024-04-27 00:57:39.717095] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.249 [2024-04-27 00:57:39.717274] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.249 [2024-04-27 00:57:39.717283] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.249 [2024-04-27 00:57:39.717290] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.249 [2024-04-27 00:57:39.720125] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.249 [2024-04-27 00:57:39.728850] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.249 [2024-04-27 00:57:39.729527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.249 [2024-04-27 00:57:39.729971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.249 [2024-04-27 00:57:39.729981] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.249 [2024-04-27 00:57:39.729989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.249 [2024-04-27 00:57:39.730170] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.249 [2024-04-27 00:57:39.730347] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.249 [2024-04-27 00:57:39.730356] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.249 [2024-04-27 00:57:39.730362] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.249 [2024-04-27 00:57:39.733178] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.249 [2024-04-27 00:57:39.741869] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.249 [2024-04-27 00:57:39.742538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.249 [2024-04-27 00:57:39.742958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.249 [2024-04-27 00:57:39.742968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.249 [2024-04-27 00:57:39.742975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.249 [2024-04-27 00:57:39.743158] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.249 [2024-04-27 00:57:39.743339] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.249 [2024-04-27 00:57:39.743347] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.250 [2024-04-27 00:57:39.743353] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.250 [2024-04-27 00:57:39.746130] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.250 [2024-04-27 00:57:39.754923] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.250 [2024-04-27 00:57:39.755549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.756001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.756011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.250 [2024-04-27 00:57:39.756019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.250 [2024-04-27 00:57:39.756203] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.250 [2024-04-27 00:57:39.756381] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.250 [2024-04-27 00:57:39.756389] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.250 [2024-04-27 00:57:39.756397] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.250 [2024-04-27 00:57:39.759182] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.250 [2024-04-27 00:57:39.768110] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.250 [2024-04-27 00:57:39.768703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.769159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.769171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.250 [2024-04-27 00:57:39.769180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.250 [2024-04-27 00:57:39.769364] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.250 [2024-04-27 00:57:39.769537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.250 [2024-04-27 00:57:39.769545] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.250 [2024-04-27 00:57:39.769552] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.250 [2024-04-27 00:57:39.772402] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.250 [2024-04-27 00:57:39.781229] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.250 [2024-04-27 00:57:39.781857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.782305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.782316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.250 [2024-04-27 00:57:39.782324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.250 [2024-04-27 00:57:39.782501] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.250 [2024-04-27 00:57:39.782678] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.250 [2024-04-27 00:57:39.782689] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.250 [2024-04-27 00:57:39.782696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.250 [2024-04-27 00:57:39.785519] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.250 [2024-04-27 00:57:39.787155] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.250 [2024-04-27 00:57:39.787181] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.250 [2024-04-27 00:57:39.787189] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.250 [2024-04-27 00:57:39.787195] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:47.250 [2024-04-27 00:57:39.787200] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:47.250 [2024-04-27 00:57:39.787236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.250 [2024-04-27 00:57:39.787362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.250 [2024-04-27 00:57:39.787364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.250 [2024-04-27 00:57:39.794346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.250 [2024-04-27 00:57:39.795020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.795445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.795457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.250 [2024-04-27 00:57:39.795465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.250 [2024-04-27 00:57:39.795644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.250 [2024-04-27 00:57:39.795823] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.250 [2024-04-27 00:57:39.795832] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.250 [2024-04-27 00:57:39.795839] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.250 [2024-04-27 00:57:39.798659] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.250 [2024-04-27 00:57:39.807488] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.250 [2024-04-27 00:57:39.808132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.808515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.808526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.250 [2024-04-27 00:57:39.808535] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.250 [2024-04-27 00:57:39.808712] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.250 [2024-04-27 00:57:39.808891] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.250 [2024-04-27 00:57:39.808899] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.250 [2024-04-27 00:57:39.808907] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.250 [2024-04-27 00:57:39.811729] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
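(Editor's note.) The new target comes up with core mask 0xE (binary 1110), which matches the three reactors started on cores 1, 2 and 3, and app_setup_trace points at the runtime tracepoint snapshot. The commands suggested in the notices can be used as-is while the target is running, assuming the spdk_trace tool from the SPDK build is on PATH:

    # capture a snapshot of the nvmf tracepoints from the running target (shm id 0),
    # or keep the raw trace ring for offline analysis, as the notices above suggest
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0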
00:23:47.250 [2024-04-27 00:57:39.820700] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.250 [2024-04-27 00:57:39.821400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.821844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.821855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.250 [2024-04-27 00:57:39.821863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.250 [2024-04-27 00:57:39.822042] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.250 [2024-04-27 00:57:39.822227] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.250 [2024-04-27 00:57:39.822236] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.250 [2024-04-27 00:57:39.822243] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.250 [2024-04-27 00:57:39.825062] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.250 [2024-04-27 00:57:39.833889] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.250 [2024-04-27 00:57:39.834486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.834855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.834865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.250 [2024-04-27 00:57:39.834873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.250 [2024-04-27 00:57:39.835052] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.250 [2024-04-27 00:57:39.835236] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.250 [2024-04-27 00:57:39.835245] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.250 [2024-04-27 00:57:39.835252] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.250 [2024-04-27 00:57:39.838077] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.250 [2024-04-27 00:57:39.847076] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.250 [2024-04-27 00:57:39.847659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.847978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.847989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.250 [2024-04-27 00:57:39.847997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.250 [2024-04-27 00:57:39.848181] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.250 [2024-04-27 00:57:39.848359] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.250 [2024-04-27 00:57:39.848367] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.250 [2024-04-27 00:57:39.848374] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.250 [2024-04-27 00:57:39.851197] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.250 [2024-04-27 00:57:39.860193] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.250 [2024-04-27 00:57:39.860821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.861205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.250 [2024-04-27 00:57:39.861216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.251 [2024-04-27 00:57:39.861223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.251 [2024-04-27 00:57:39.861402] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.251 [2024-04-27 00:57:39.861580] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.251 [2024-04-27 00:57:39.861588] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.251 [2024-04-27 00:57:39.861595] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.251 [2024-04-27 00:57:39.864416] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.251 [2024-04-27 00:57:39.873236] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.251 [2024-04-27 00:57:39.873860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.251 [2024-04-27 00:57:39.874243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.251 [2024-04-27 00:57:39.874255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.251 [2024-04-27 00:57:39.874262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.251 [2024-04-27 00:57:39.874439] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.251 [2024-04-27 00:57:39.874617] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.251 [2024-04-27 00:57:39.874625] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.251 [2024-04-27 00:57:39.874631] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.251 [2024-04-27 00:57:39.877450] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.251 [2024-04-27 00:57:39.886292] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.251 [2024-04-27 00:57:39.886858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.251 [2024-04-27 00:57:39.887188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.251 [2024-04-27 00:57:39.887200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.251 [2024-04-27 00:57:39.887208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.251 [2024-04-27 00:57:39.887386] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.251 [2024-04-27 00:57:39.887563] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.251 [2024-04-27 00:57:39.887571] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.251 [2024-04-27 00:57:39.887578] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.251 [2024-04-27 00:57:39.890401] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.251 [2024-04-27 00:57:39.899388] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.251 [2024-04-27 00:57:39.899995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.251 [2024-04-27 00:57:39.900273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.251 [2024-04-27 00:57:39.900284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.251 [2024-04-27 00:57:39.900295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.251 [2024-04-27 00:57:39.900472] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.251 [2024-04-27 00:57:39.900648] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.251 [2024-04-27 00:57:39.900656] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.251 [2024-04-27 00:57:39.900662] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.251 [2024-04-27 00:57:39.903483] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.251 [2024-04-27 00:57:39.912469] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.251 [2024-04-27 00:57:39.913049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.251 [2024-04-27 00:57:39.913452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.251 [2024-04-27 00:57:39.913463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.251 [2024-04-27 00:57:39.913470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.251 [2024-04-27 00:57:39.913647] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.251 [2024-04-27 00:57:39.913824] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.251 [2024-04-27 00:57:39.913832] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.251 [2024-04-27 00:57:39.913838] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.251 [2024-04-27 00:57:39.916661] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.251 [2024-04-27 00:57:39.925651] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.251 [2024-04-27 00:57:39.926048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.251 [2024-04-27 00:57:39.926380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.251 [2024-04-27 00:57:39.926391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.251 [2024-04-27 00:57:39.926398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.251 [2024-04-27 00:57:39.926574] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.251 [2024-04-27 00:57:39.926752] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.251 [2024-04-27 00:57:39.926760] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.251 [2024-04-27 00:57:39.926766] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.251 [2024-04-27 00:57:39.929587] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.251 [2024-04-27 00:57:39.938738] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.251 [2024-04-27 00:57:39.939256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.251 [2024-04-27 00:57:39.939669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.251 [2024-04-27 00:57:39.939681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.251 [2024-04-27 00:57:39.939689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.251 [2024-04-27 00:57:39.939891] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.251 [2024-04-27 00:57:39.940068] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.251 [2024-04-27 00:57:39.940090] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.251 [2024-04-27 00:57:39.940097] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.251 [2024-04-27 00:57:39.942932] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.522 [2024-04-27 00:57:39.951920] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:39.952491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:39.952815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:39.952825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:39.952833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:39.953009] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:39.953193] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:39.953202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:39.953209] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:39.956063] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.523 [2024-04-27 00:57:39.965058] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:39.965581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:39.965900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:39.965910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:39.965917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:39.966099] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:39.966277] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:39.966285] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:39.966291] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:39.969113] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.523 [2024-04-27 00:57:39.978098] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:39.978700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:39.979027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:39.979037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:39.979044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:39.979226] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:39.979407] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:39.979415] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:39.979421] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:39.982241] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.523 [2024-04-27 00:57:39.991223] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:39.991770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:39.992139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:39.992150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:39.992157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:39.992334] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:39.992512] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:39.992520] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:39.992526] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:39.995351] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.523 [2024-04-27 00:57:40.004346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:40.004905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.005244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.005255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:40.005262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:40.005439] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:40.005616] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:40.005624] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:40.005631] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:40.008886] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.523 [2024-04-27 00:57:40.017396] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:40.017901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.018239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.018251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:40.018258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:40.018436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:40.018613] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:40.018625] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:40.018632] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:40.021458] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.523 [2024-04-27 00:57:40.030459] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:40.031090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.031260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.031269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:40.031276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:40.031453] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:40.031631] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:40.031639] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:40.031645] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:40.034491] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.523 [2024-04-27 00:57:40.043647] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:40.044309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.044639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.044650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:40.044657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:40.044834] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:40.045010] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:40.045018] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:40.045024] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:40.047845] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.523 [2024-04-27 00:57:40.056828] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:40.057404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.057737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.057747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:40.057754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:40.057931] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:40.058111] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:40.058119] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:40.058130] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:40.060947] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.523 [2024-04-27 00:57:40.069939] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:40.070551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.070875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.070885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:40.070892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:40.071069] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:40.071252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:40.071260] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:40.071266] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:40.074086] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.523 [2024-04-27 00:57:40.083320] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:40.083841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.084264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.084275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:40.084282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:40.084460] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:40.084637] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:40.084645] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:40.084651] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:40.087487] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.523 [2024-04-27 00:57:40.096418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:40.096970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.097416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.097430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:40.097437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:40.097616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:40.097793] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:40.097802] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:40.097808] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:40.100655] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.523 [2024-04-27 00:57:40.109529] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:40.110166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.110547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.110557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:40.110565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:40.110744] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:40.110921] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:40.110929] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:40.110936] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:40.113756] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.523 [2024-04-27 00:57:40.122582] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.523 [2024-04-27 00:57:40.123190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.123571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.523 [2024-04-27 00:57:40.123581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.523 [2024-04-27 00:57:40.123588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.523 [2024-04-27 00:57:40.123765] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.523 [2024-04-27 00:57:40.123942] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.523 [2024-04-27 00:57:40.123950] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.523 [2024-04-27 00:57:40.123956] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.523 [2024-04-27 00:57:40.126774] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.523 [2024-04-27 00:57:40.135764] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.524 [2024-04-27 00:57:40.136322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.524 [2024-04-27 00:57:40.136647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.524 [2024-04-27 00:57:40.136658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.524 [2024-04-27 00:57:40.136665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.524 [2024-04-27 00:57:40.136841] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.524 [2024-04-27 00:57:40.137017] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.524 [2024-04-27 00:57:40.137025] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.524 [2024-04-27 00:57:40.137031] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.524 [2024-04-27 00:57:40.139849] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.524 [2024-04-27 00:57:40.148847] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.524 [2024-04-27 00:57:40.149521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.524 [2024-04-27 00:57:40.149851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.524 [2024-04-27 00:57:40.149861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.524 [2024-04-27 00:57:40.149867] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.524 [2024-04-27 00:57:40.150044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.524 [2024-04-27 00:57:40.150227] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.524 [2024-04-27 00:57:40.150236] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.524 [2024-04-27 00:57:40.150242] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.524 [2024-04-27 00:57:40.153060] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.524 [2024-04-27 00:57:40.161886] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.524 [2024-04-27 00:57:40.162603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.524 [2024-04-27 00:57:40.162926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.524 [2024-04-27 00:57:40.162937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.524 [2024-04-27 00:57:40.162944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.524 [2024-04-27 00:57:40.163158] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.524 [2024-04-27 00:57:40.163338] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.524 [2024-04-27 00:57:40.163346] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.524 [2024-04-27 00:57:40.163352] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.524 [2024-04-27 00:57:40.166174] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.524 [2024-04-27 00:57:40.174993] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.524 [2024-04-27 00:57:40.175564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.524 [2024-04-27 00:57:40.176136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.524 [2024-04-27 00:57:40.176148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.524 [2024-04-27 00:57:40.176155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.524 [2024-04-27 00:57:40.176333] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.524 [2024-04-27 00:57:40.176510] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.524 [2024-04-27 00:57:40.176518] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.524 [2024-04-27 00:57:40.176525] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.524 [2024-04-27 00:57:40.179346] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.524 [2024-04-27 00:57:40.188159] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.524 [2024-04-27 00:57:40.188670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.524 [2024-04-27 00:57:40.189052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.524 [2024-04-27 00:57:40.189062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.524 [2024-04-27 00:57:40.189074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.524 [2024-04-27 00:57:40.189251] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.524 [2024-04-27 00:57:40.189428] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.524 [2024-04-27 00:57:40.189436] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.524 [2024-04-27 00:57:40.189442] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.524 [2024-04-27 00:57:40.192268] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.524 [2024-04-27 00:57:40.201247] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.524 [2024-04-27 00:57:40.201806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.524 [2024-04-27 00:57:40.202194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.524 [2024-04-27 00:57:40.202207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.524 [2024-04-27 00:57:40.202216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.524 [2024-04-27 00:57:40.202426] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.524 [2024-04-27 00:57:40.202615] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.524 [2024-04-27 00:57:40.202623] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.524 [2024-04-27 00:57:40.202630] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.524 [2024-04-27 00:57:40.205476] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.787 [2024-04-27 00:57:40.214286] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.787 [2024-04-27 00:57:40.214909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.787 [2024-04-27 00:57:40.215308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.787 [2024-04-27 00:57:40.215321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.787 [2024-04-27 00:57:40.215329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.787 [2024-04-27 00:57:40.215509] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.787 [2024-04-27 00:57:40.215687] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.787 [2024-04-27 00:57:40.215695] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.787 [2024-04-27 00:57:40.215701] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.787 [2024-04-27 00:57:40.218521] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.787 [2024-04-27 00:57:40.227404] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.787 [2024-04-27 00:57:40.227980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.787 [2024-04-27 00:57:40.228384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.787 [2024-04-27 00:57:40.228395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.787 [2024-04-27 00:57:40.228402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.787 [2024-04-27 00:57:40.228580] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.787 [2024-04-27 00:57:40.228757] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.787 [2024-04-27 00:57:40.228765] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.787 [2024-04-27 00:57:40.228771] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.787 [2024-04-27 00:57:40.231592] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.787 [2024-04-27 00:57:40.240575] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.787 [2024-04-27 00:57:40.241203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.787 [2024-04-27 00:57:40.241603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.787 [2024-04-27 00:57:40.241613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.787 [2024-04-27 00:57:40.241620] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.787 [2024-04-27 00:57:40.241797] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.787 [2024-04-27 00:57:40.241973] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.787 [2024-04-27 00:57:40.241981] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.787 [2024-04-27 00:57:40.241987] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.787 [2024-04-27 00:57:40.244804] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.787 [2024-04-27 00:57:40.253616] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.787 [2024-04-27 00:57:40.254255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.787 [2024-04-27 00:57:40.254656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.787 [2024-04-27 00:57:40.254666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.787 [2024-04-27 00:57:40.254673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.787 [2024-04-27 00:57:40.254850] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.787 [2024-04-27 00:57:40.255026] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.787 [2024-04-27 00:57:40.255034] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.787 [2024-04-27 00:57:40.255040] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.787 [2024-04-27 00:57:40.257859] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.787 [2024-04-27 00:57:40.266662] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.787 [2024-04-27 00:57:40.267222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.787 [2024-04-27 00:57:40.267644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.787 [2024-04-27 00:57:40.267654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.267664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.267841] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.268017] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.268025] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.268032] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.270847] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.788 [2024-04-27 00:57:40.279818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.280470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.280916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.280926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.280933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.281115] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.281293] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.281302] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.281308] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.284126] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.788 [2024-04-27 00:57:40.292934] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.293586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.293971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.293982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.293989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.294170] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.294347] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.294355] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.294362] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.297180] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.788 [2024-04-27 00:57:40.305989] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.306644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.307080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.307090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.307098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.307279] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.307456] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.307465] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.307471] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.310289] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.788 [2024-04-27 00:57:40.319104] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.319755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.320203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.320215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.320223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.320400] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.320577] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.320585] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.320592] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.323412] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.788 [2024-04-27 00:57:40.332222] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.332840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.333229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.333241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.333248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.333426] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.333603] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.333611] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.333617] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.336433] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.788 [2024-04-27 00:57:40.345251] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.345902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.346353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.346365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.346372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.346550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.346730] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.346739] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.346745] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.349564] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.788 [2024-04-27 00:57:40.358374] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.359019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.359455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.359466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.359473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.359649] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.359826] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.359834] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.359840] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.362657] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.788 [2024-04-27 00:57:40.371486] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.372078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.372525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.372535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.372542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.372719] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.372896] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.372904] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.372910] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.375728] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.788 [2024-04-27 00:57:40.384536] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.385182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.385627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.385637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.385644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.385820] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.385999] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.386011] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.386017] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.388838] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.788 [2024-04-27 00:57:40.397651] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.398297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.398670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.398681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.398688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.398865] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.399041] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.399049] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.399056] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.401877] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.788 [2024-04-27 00:57:40.410702] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.411281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.411706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.411716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.411724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.411901] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.412084] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.412093] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.412100] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.414913] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.788 [2024-04-27 00:57:40.423724] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.424356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.424729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.424739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.424746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.424924] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.425105] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.425113] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.425123] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.427937] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.788 [2024-04-27 00:57:40.436744] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.437374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.437746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.437756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.437763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.437940] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.438121] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.438129] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.438135] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.440961] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.788 [2024-04-27 00:57:40.449776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.450252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.450674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.450685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.450692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.450868] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.451045] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.451052] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.451059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.453878] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:47.788 [2024-04-27 00:57:40.462854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.463493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.463883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.463893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.463900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.464080] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.464257] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.464265] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.464272] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 00:57:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:47.788 00:57:40 -- common/autotest_common.sh@850 -- # return 0 00:23:47.788 00:57:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:47.788 00:57:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:47.788 00:57:40 -- common/autotest_common.sh@10 -- # set +x 00:23:47.788 [2024-04-27 00:57:40.467096] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:47.788 [2024-04-27 00:57:40.475912] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:47.788 [2024-04-27 00:57:40.476484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.476908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.788 [2024-04-27 00:57:40.476918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:47.788 [2024-04-27 00:57:40.476925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:47.788 [2024-04-27 00:57:40.477106] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:47.788 [2024-04-27 00:57:40.477284] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:47.788 [2024-04-27 00:57:40.477292] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:47.788 [2024-04-27 00:57:40.477300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:47.788 [2024-04-27 00:57:40.480202] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.048 [2024-04-27 00:57:40.489119] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.048 [2024-04-27 00:57:40.489734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.048 [2024-04-27 00:57:40.489889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.048 [2024-04-27 00:57:40.489900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:48.048 [2024-04-27 00:57:40.489907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:48.048 [2024-04-27 00:57:40.490089] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:48.048 [2024-04-27 00:57:40.490285] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.048 [2024-04-27 00:57:40.490293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.048 [2024-04-27 00:57:40.490300] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.048 [2024-04-27 00:57:40.493135] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.048 00:57:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.048 00:57:40 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:48.048 00:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.048 00:57:40 -- common/autotest_common.sh@10 -- # set +x 00:23:48.048 [2024-04-27 00:57:40.502285] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.048 [2024-04-27 00:57:40.502810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.048 [2024-04-27 00:57:40.503235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.048 [2024-04-27 00:57:40.503248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:48.048 [2024-04-27 00:57:40.503255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:48.048 [2024-04-27 00:57:40.503436] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:48.048 [2024-04-27 00:57:40.503612] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.048 [2024-04-27 00:57:40.503620] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.048 [2024-04-27 00:57:40.503626] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.048 [2024-04-27 00:57:40.506441] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.048 [2024-04-27 00:57:40.507016] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.048 00:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.048 00:57:40 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:48.048 00:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.048 00:57:40 -- common/autotest_common.sh@10 -- # set +x 00:23:48.048 [2024-04-27 00:57:40.515435] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.048 [2024-04-27 00:57:40.516066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.048 [2024-04-27 00:57:40.516395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.048 [2024-04-27 00:57:40.516405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:48.048 [2024-04-27 00:57:40.516412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:48.048 [2024-04-27 00:57:40.516588] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:48.048 [2024-04-27 00:57:40.516765] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.048 [2024-04-27 00:57:40.516773] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.048 [2024-04-27 00:57:40.516779] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
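The two rpc_cmd invocations above (nvmf_create_transport and bdev_malloc_create) are the harness wrapper around SPDK's scripts/rpc.py. A hedged equivalent of the same two steps as standalone commands, with the flags copied verbatim from the log and the default RPC socket assumed:

    # Sketch of the equivalent standalone RPC calls (run from the SPDK checkout):
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8192-byte IO unit
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM bdev, 512-byte blocks, named Malloc0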
00:23:48.049 [2024-04-27 00:57:40.519600] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.049 [2024-04-27 00:57:40.528582] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.049 [2024-04-27 00:57:40.529193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.049 [2024-04-27 00:57:40.529664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.049 [2024-04-27 00:57:40.529674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:48.049 [2024-04-27 00:57:40.529681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:48.049 [2024-04-27 00:57:40.529857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:48.049 [2024-04-27 00:57:40.530034] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.049 [2024-04-27 00:57:40.530042] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.049 [2024-04-27 00:57:40.530049] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.049 [2024-04-27 00:57:40.532869] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.049 [2024-04-27 00:57:40.541720] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.049 [2024-04-27 00:57:40.542353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.049 [2024-04-27 00:57:40.542797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.049 [2024-04-27 00:57:40.542808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:48.049 [2024-04-27 00:57:40.542818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:48.049 [2024-04-27 00:57:40.542996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:48.049 [2024-04-27 00:57:40.543178] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.049 [2024-04-27 00:57:40.543187] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.049 [2024-04-27 00:57:40.543193] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.049 [2024-04-27 00:57:40.546011] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:48.049 Malloc0 00:23:48.049 00:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.049 00:57:40 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:48.049 00:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.049 00:57:40 -- common/autotest_common.sh@10 -- # set +x 00:23:48.049 [2024-04-27 00:57:40.554821] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.049 [2024-04-27 00:57:40.555459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.049 [2024-04-27 00:57:40.555797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.049 [2024-04-27 00:57:40.555807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:48.049 [2024-04-27 00:57:40.555814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:48.049 [2024-04-27 00:57:40.555991] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:48.049 [2024-04-27 00:57:40.556171] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.049 [2024-04-27 00:57:40.556180] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.049 [2024-04-27 00:57:40.556187] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:48.049 [2024-04-27 00:57:40.558998] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.049 00:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.049 00:57:40 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:48.049 00:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.049 00:57:40 -- common/autotest_common.sh@10 -- # set +x 00:23:48.049 [2024-04-27 00:57:40.567976] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.049 [2024-04-27 00:57:40.568624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.049 [2024-04-27 00:57:40.569068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.049 [2024-04-27 00:57:40.569083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ef19b0 with addr=10.0.0.2, port=4420 00:23:48.049 [2024-04-27 00:57:40.569090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ef19b0 is same with the state(5) to be set 00:23:48.049 [2024-04-27 00:57:40.569267] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef19b0 (9): Bad file descriptor 00:23:48.049 [2024-04-27 00:57:40.569444] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:48.049 [2024-04-27 00:57:40.569452] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:48.049 [2024-04-27 00:57:40.569458] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
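Together with the transport and Malloc0 bdev created above, the nvmf_create_subsystem and nvmf_subsystem_add_ns calls here, plus the nvmf_subsystem_add_listener call logged immediately below, complete the target bring-up that bdevperf connects to. A sketch of the same sequence as standalone rpc.py calls, followed by an illustrative kernel-initiator attach that this harness does not itself run:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Illustration only: a Linux kernel initiator would then reach the namespace with nvme-cli.
    sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1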
00:23:48.049 00:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.049 00:57:40 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:48.049 00:57:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:48.049 00:57:40 -- common/autotest_common.sh@10 -- # set +x 00:23:48.049 [2024-04-27 00:57:40.572279] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:48.049 [2024-04-27 00:57:40.574221] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.049 00:57:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:48.049 00:57:40 -- host/bdevperf.sh@38 -- # wait 1809555 00:23:48.049 [2024-04-27 00:57:40.581124] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:48.049 [2024-04-27 00:57:40.705498] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:58.023 00:23:58.023 Latency(us) 00:23:58.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.023 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:58.023 Verification LBA range: start 0x0 length 0x4000 00:23:58.023 Nvme1n1 : 15.01 7903.87 30.87 12027.73 0.00 6399.99 1560.04 28835.84 00:23:58.023 =================================================================================================================== 00:23:58.023 Total : 7903.87 30.87 12027.73 0.00 6399.99 1560.04 28835.84 00:23:58.023 00:57:49 -- host/bdevperf.sh@39 -- # sync 00:23:58.023 00:57:49 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:58.023 00:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.023 00:57:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.023 00:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.023 00:57:49 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:23:58.023 00:57:49 -- host/bdevperf.sh@44 -- # nvmftestfini 00:23:58.023 00:57:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:58.023 00:57:49 -- nvmf/common.sh@117 -- # sync 00:23:58.023 00:57:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:58.023 00:57:49 -- nvmf/common.sh@120 -- # set +e 00:23:58.023 00:57:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:58.023 00:57:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:58.023 rmmod nvme_tcp 00:23:58.023 rmmod nvme_fabrics 00:23:58.023 rmmod nvme_keyring 00:23:58.023 00:57:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:58.023 00:57:49 -- nvmf/common.sh@124 -- # set -e 00:23:58.023 00:57:49 -- nvmf/common.sh@125 -- # return 0 00:23:58.023 00:57:49 -- nvmf/common.sh@478 -- # '[' -n 1810487 ']' 00:23:58.023 00:57:49 -- nvmf/common.sh@479 -- # killprocess 1810487 00:23:58.023 00:57:49 -- common/autotest_common.sh@936 -- # '[' -z 1810487 ']' 00:23:58.023 00:57:49 -- common/autotest_common.sh@940 -- # kill -0 1810487 00:23:58.023 00:57:49 -- common/autotest_common.sh@941 -- # uname 00:23:58.023 00:57:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:58.023 00:57:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1810487 00:23:58.023 00:57:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:58.023 00:57:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:58.023 00:57:49 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 1810487' 00:23:58.023 killing process with pid 1810487 00:23:58.023 00:57:49 -- common/autotest_common.sh@955 -- # kill 1810487 00:23:58.023 00:57:49 -- common/autotest_common.sh@960 -- # wait 1810487 00:23:58.023 00:57:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:58.023 00:57:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:58.023 00:57:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:58.023 00:57:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:58.023 00:57:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:58.023 00:57:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.023 00:57:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.023 00:57:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.957 00:57:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.957 00:23:58.957 real 0m26.094s 00:23:58.957 user 1m3.098s 00:23:58.957 sys 0m6.116s 00:23:58.957 00:57:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:58.957 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.957 ************************************ 00:23:58.957 END TEST nvmf_bdevperf 00:23:58.957 ************************************ 00:23:59.216 00:57:51 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:23:59.216 00:57:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:59.216 00:57:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:59.216 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:59.216 ************************************ 00:23:59.216 START TEST nvmf_target_disconnect 00:23:59.216 ************************************ 00:23:59.216 00:57:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:23:59.216 * Looking for test storage... 
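The bdevperf Latency(us) summary a few lines above reads, per its printed header, as runtime(s), IOPS, MiB/s, Fail/s, TO/s and then average/min/max latency in microseconds. The throughput column follows directly from the IOPS figure and the 4096-byte IO size of the job:

    # Sanity check of the Nvme1n1 row (4 KiB IOs):
    #   MiB/s = IOPS * 4096 / 2^20
    #   7903.87 * 4096 / 1048576 ~= 30.87 MiB/s, matching the reported column.
    # The large Fail/s value (12027.73) counts I/Os failed per second while the
    # controller was repeatedly reset against a refused connection, as logged above.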
00:23:59.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:59.216 00:57:51 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.216 00:57:51 -- nvmf/common.sh@7 -- # uname -s 00:23:59.216 00:57:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.216 00:57:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.216 00:57:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.216 00:57:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.216 00:57:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.216 00:57:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.216 00:57:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.216 00:57:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.216 00:57:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.216 00:57:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.216 00:57:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:59.216 00:57:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:59.216 00:57:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.216 00:57:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.216 00:57:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.216 00:57:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.216 00:57:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.216 00:57:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.216 00:57:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.216 00:57:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.216 00:57:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.216 00:57:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.216 00:57:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.216 00:57:51 -- paths/export.sh@5 -- # export PATH 00:23:59.216 00:57:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.216 00:57:51 -- nvmf/common.sh@47 -- # : 0 00:23:59.216 00:57:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.216 00:57:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.216 00:57:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.216 00:57:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.216 00:57:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.216 00:57:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.216 00:57:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.216 00:57:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.216 00:57:51 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:59.216 00:57:51 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:23:59.216 00:57:51 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:23:59.475 00:57:51 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:23:59.475 00:57:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:59.475 00:57:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.475 00:57:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:59.475 00:57:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:59.475 00:57:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:59.475 00:57:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.475 00:57:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.475 00:57:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.475 00:57:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:59.475 00:57:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:59.475 00:57:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.475 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:24:04.769 00:57:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:04.769 00:57:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:04.769 00:57:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:04.769 00:57:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:04.769 00:57:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:04.769 00:57:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:04.769 00:57:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:04.769 
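For the host identity, nvmf/common.sh calls nvme gen-hostnqn (logged above) and ends up with an NVME_HOSTID that is the UUID portion of that NQN, as the two logged values show. A rough by-hand equivalent is sketched below; the exact derivation used by common.sh is not reproduced in this log:

    HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*:}            # keep only the trailing UUID, as NVME_HOSTID does above
    echo "--hostnqn=$HOSTNQN --hostid=$HOSTID"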
00:57:56 -- nvmf/common.sh@295 -- # net_devs=() 00:24:04.769 00:57:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:04.769 00:57:56 -- nvmf/common.sh@296 -- # e810=() 00:24:04.769 00:57:56 -- nvmf/common.sh@296 -- # local -ga e810 00:24:04.769 00:57:56 -- nvmf/common.sh@297 -- # x722=() 00:24:04.769 00:57:56 -- nvmf/common.sh@297 -- # local -ga x722 00:24:04.769 00:57:56 -- nvmf/common.sh@298 -- # mlx=() 00:24:04.769 00:57:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:04.769 00:57:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.769 00:57:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.769 00:57:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.769 00:57:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.769 00:57:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.769 00:57:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.769 00:57:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.769 00:57:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.769 00:57:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.769 00:57:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.769 00:57:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.769 00:57:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:04.769 00:57:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:04.769 00:57:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:04.769 00:57:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:04.769 00:57:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:04.769 00:57:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:04.769 00:57:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:04.769 00:57:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:04.769 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:04.769 00:57:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:04.769 00:57:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:04.770 00:57:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.770 00:57:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.770 00:57:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:04.770 00:57:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:04.770 00:57:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:04.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:04.770 00:57:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:04.770 00:57:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:04.770 00:57:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.770 00:57:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.770 00:57:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:04.770 00:57:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:04.770 00:57:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:04.770 00:57:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:04.770 00:57:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:04.770 00:57:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.770 00:57:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:04.770 00:57:56 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.770 00:57:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:04.770 Found net devices under 0000:86:00.0: cvl_0_0 00:24:04.770 00:57:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.770 00:57:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:04.770 00:57:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.770 00:57:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:04.770 00:57:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.770 00:57:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:04.770 Found net devices under 0000:86:00.1: cvl_0_1 00:24:04.770 00:57:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.770 00:57:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:04.770 00:57:56 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:04.770 00:57:56 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:04.770 00:57:56 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:04.770 00:57:56 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:04.770 00:57:56 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.770 00:57:56 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.770 00:57:56 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:04.770 00:57:56 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:04.770 00:57:56 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:04.770 00:57:56 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:04.770 00:57:56 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:04.770 00:57:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:04.770 00:57:56 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.770 00:57:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:04.770 00:57:56 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:04.770 00:57:56 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:04.770 00:57:56 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.770 00:57:56 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.770 00:57:56 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.770 00:57:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:04.770 00:57:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.770 00:57:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.770 00:57:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.770 00:57:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:04.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:04.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:24:04.770 00:24:04.770 --- 10.0.0.2 ping statistics --- 00:24:04.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.770 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:24:04.770 00:57:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:04.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.455 ms 00:24:04.770 00:24:04.770 --- 10.0.0.1 ping statistics --- 00:24:04.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.770 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:24:04.770 00:57:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.770 00:57:57 -- nvmf/common.sh@411 -- # return 0 00:24:04.770 00:57:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:04.770 00:57:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.770 00:57:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:04.770 00:57:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:04.770 00:57:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.770 00:57:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:04.770 00:57:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:04.770 00:57:57 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:04.770 00:57:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:04.770 00:57:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:04.770 00:57:57 -- common/autotest_common.sh@10 -- # set +x 00:24:04.770 ************************************ 00:24:04.770 START TEST nvmf_target_disconnect_tc1 00:24:04.770 ************************************ 00:24:04.770 00:57:57 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:24:04.770 00:57:57 -- host/target_disconnect.sh@32 -- # set +e 00:24:04.770 00:57:57 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:04.770 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.770 [2024-04-27 00:57:57.399837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.770 [2024-04-27 00:57:57.400324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:04.770 [2024-04-27 00:57:57.400337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19fcbb0 with addr=10.0.0.2, port=4420 00:24:04.770 [2024-04-27 00:57:57.400362] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:04.770 [2024-04-27 00:57:57.400374] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:04.770 [2024-04-27 00:57:57.400381] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:04.770 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:04.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:04.770 Initializing NVMe Controllers 00:24:04.770 00:57:57 -- host/target_disconnect.sh@33 -- # trap - ERR 00:24:04.770 00:57:57 -- host/target_disconnect.sh@33 -- # print_backtrace 00:24:04.770 00:57:57 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:24:04.770 00:57:57 -- common/autotest_common.sh@1139 -- # return 0 00:24:04.770 00:57:57 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:24:04.770 00:57:57 -- host/target_disconnect.sh@41 -- # set -e 00:24:04.770 00:24:04.770 real 0m0.091s 00:24:04.770 user 0m0.043s 00:24:04.770 sys 0m0.047s 00:24:04.770 00:57:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:04.770 00:57:57 -- common/autotest_common.sh@10 -- # set +x 00:24:04.770 ************************************ 00:24:04.770 
END TEST nvmf_target_disconnect_tc1 00:24:04.770 ************************************ 00:24:04.770 00:57:57 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:04.770 00:57:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:04.770 00:57:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:04.770 00:57:57 -- common/autotest_common.sh@10 -- # set +x 00:24:05.030 ************************************ 00:24:05.030 START TEST nvmf_target_disconnect_tc2 00:24:05.030 ************************************ 00:24:05.030 00:57:57 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:24:05.030 00:57:57 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:24:05.030 00:57:57 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:05.030 00:57:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:05.030 00:57:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:05.030 00:57:57 -- common/autotest_common.sh@10 -- # set +x 00:24:05.030 00:57:57 -- nvmf/common.sh@470 -- # nvmfpid=1815654 00:24:05.030 00:57:57 -- nvmf/common.sh@471 -- # waitforlisten 1815654 00:24:05.030 00:57:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:05.030 00:57:57 -- common/autotest_common.sh@817 -- # '[' -z 1815654 ']' 00:24:05.030 00:57:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.030 00:57:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:05.030 00:57:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.030 00:57:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:05.030 00:57:57 -- common/autotest_common.sh@10 -- # set +x 00:24:05.030 [2024-04-27 00:57:57.647291] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:24:05.030 [2024-04-27 00:57:57.647342] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.030 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.030 [2024-04-27 00:57:57.716806] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.289 [2024-04-27 00:57:57.794698] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.289 [2024-04-27 00:57:57.794734] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.289 [2024-04-27 00:57:57.794740] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.289 [2024-04-27 00:57:57.794746] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.289 [2024-04-27 00:57:57.794751] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
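The nvmftestinit sequence logged above splits the two e810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator interface (10.0.0.1), with an iptables rule opening TCP port 4420; tc2 then launches nvmf_tgt inside that namespace on cores 4-7 (-m 0xF0), which is what the reactor messages below report. A condensed sketch of the same setup, with the commands taken from the log (run as root from the SPDK checkout):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # tc2 then starts the target inside the namespace on cores 4-7:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0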
00:24:05.289 [2024-04-27 00:57:57.794863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:05.289 [2024-04-27 00:57:57.794970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:05.289 [2024-04-27 00:57:57.794995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:05.289 [2024-04-27 00:57:57.794996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:24:05.856 00:57:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:05.856 00:57:58 -- common/autotest_common.sh@850 -- # return 0 00:24:05.857 00:57:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:05.857 00:57:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:05.857 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:05.857 00:57:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.857 00:57:58 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:05.857 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.857 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:05.857 Malloc0 00:24:05.857 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.857 00:57:58 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:05.857 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.857 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:05.857 [2024-04-27 00:57:58.522769] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.857 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.857 00:57:58 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:05.857 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.857 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:05.857 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.857 00:57:58 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:05.857 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.857 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:05.857 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.857 00:57:58 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.857 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.857 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:05.857 [2024-04-27 00:57:58.547858] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.116 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.116 00:57:58 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:06.116 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.116 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.116 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.116 00:57:58 -- host/target_disconnect.sh@50 -- # reconnectpid=1815878 00:24:06.116 00:57:58 -- host/target_disconnect.sh@52 -- # sleep 2 00:24:06.116 00:57:58 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:06.116 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.025 00:58:00 -- host/target_disconnect.sh@53 -- # kill -9 1815654 00:24:08.025 00:58:00 -- host/target_disconnect.sh@55 -- # sleep 2 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 [2024-04-27 00:58:00.574991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read 
completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Read completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.025 Write completed with error (sct=0, sc=8) 00:24:08.025 starting I/O failed 00:24:08.026 [2024-04-27 00:58:00.575207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:08.026 Read completed with error (sct=0, sc=8) 00:24:08.026 starting I/O failed 00:24:08.026 Read completed with error (sct=0, sc=8) 00:24:08.026 starting I/O failed 00:24:08.026 Read completed with error (sct=0, sc=8) 00:24:08.026 starting I/O failed 00:24:08.026 Read completed with error (sct=0, sc=8) 00:24:08.026 starting I/O failed 00:24:08.026 Read completed with error (sct=0, sc=8) 00:24:08.026 starting I/O failed 00:24:08.026 Read completed with error (sct=0, sc=8) 00:24:08.026 starting I/O failed 00:24:08.026 Read completed with error (sct=0, sc=8) 00:24:08.026 starting I/O failed 00:24:08.026 Read completed with error (sct=0, sc=8) 00:24:08.026 starting I/O failed 00:24:08.026 Read completed with error (sct=0, sc=8) 00:24:08.026 starting I/O failed 00:24:08.026 Read completed with error (sct=0, sc=8) 00:24:08.026 starting I/O failed 00:24:08.026 Write completed with error (sct=0, sc=8) 00:24:08.026 starting I/O failed 00:24:08.026 Write completed with error 
00:24:08.026 [2024-04-27 00:58:00.575415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:08.026 [2024-04-27 00:58:00.575725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.026 [2024-04-27 00:58:00.576032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.026 [2024-04-27 00:58:00.576045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:08.026 qpair failed and we were unable to recover it.
00:24:08.026 [2024-04-27 00:58:00.576366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.026 [2024-04-27 00:58:00.576818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.026 [2024-04-27 00:58:00.576848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:08.026 qpair failed and we were unable to recover it.
00:24:08.026 [2024-04-27 00:58:00.577267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.026 [2024-04-27 00:58:00.577736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:08.026 [2024-04-27 00:58:00.577766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:08.026 qpair failed and we were unable to recover it.
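The repeated "connect() failed, errno = 111" entries are ordinary TCP connection refusals: once the target process has been killed with kill -9, nothing listens on 10.0.0.2:4420 any more, so each reconnect attempt fails with ECONNREFUSED, which is 111 on Linux. A minimal standalone C sketch, using the address and port from this log and assuming the target is down, reproduces the same errno:

/*
 * Standalone sketch (not SPDK code): with no listener on 10.0.0.2:4420,
 * connect() returns -1 and errno is ECONNREFUSED (111 on Linux), matching
 * the "connect() failed, errno = 111" entries above.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),          /* NVMe/TCP port used in this log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target killed this prints: errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}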
00:24:08.026 - 00:24:08.031 [... the same sequence repeats for every further reconnect attempt: two "posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111" entries, one "nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.", with wall-clock timestamps running from 00:58:00.578167 through 00:58:00.691102 ...]
00:24:08.031 [2024-04-27 00:58:00.691519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.691924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.691952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.692417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.692871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.692900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.693355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.693740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.693769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.694119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.694503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.694533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.694933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.695386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.695416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.695818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.696220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.696251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.696662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.697097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.697127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 
00:24:08.031 [2024-04-27 00:58:00.697559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.697966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.697997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.698407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.698827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.698856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.699274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.699624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.699654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.700064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.700540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.700554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.700934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.701407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.701438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.701844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.702277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.702307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.702792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.703194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.703210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 
00:24:08.031 [2024-04-27 00:58:00.703636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.704000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.704015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.704243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.704617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.704646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.705057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.705520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.705550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.705959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.706442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.706473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.031 qpair failed and we were unable to recover it. 00:24:08.031 [2024-04-27 00:58:00.706801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.707203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.031 [2024-04-27 00:58:00.707233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.032 qpair failed and we were unable to recover it. 00:24:08.032 [2024-04-27 00:58:00.707710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.708052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.708090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.032 qpair failed and we were unable to recover it. 00:24:08.032 [2024-04-27 00:58:00.708400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.708823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.708852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.032 qpair failed and we were unable to recover it. 
00:24:08.032 [2024-04-27 00:58:00.709274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.709751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.709789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.032 qpair failed and we were unable to recover it. 00:24:08.032 [2024-04-27 00:58:00.710179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.710599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.710629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.032 qpair failed and we were unable to recover it. 00:24:08.032 [2024-04-27 00:58:00.711032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.711460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.711475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.032 qpair failed and we were unable to recover it. 00:24:08.032 [2024-04-27 00:58:00.711922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.712373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.712403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.032 qpair failed and we were unable to recover it. 00:24:08.032 [2024-04-27 00:58:00.712764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.713213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.713243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.032 qpair failed and we were unable to recover it. 00:24:08.032 [2024-04-27 00:58:00.713738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.714090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.714120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.032 qpair failed and we were unable to recover it. 00:24:08.032 [2024-04-27 00:58:00.714574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.715033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.715062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.032 qpair failed and we were unable to recover it. 
00:24:08.032 [2024-04-27 00:58:00.715482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.715713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.032 [2024-04-27 00:58:00.715742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.032 qpair failed and we were unable to recover it. 00:24:08.032 [2024-04-27 00:58:00.716162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.716533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.716547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.300 qpair failed and we were unable to recover it. 00:24:08.300 [2024-04-27 00:58:00.716935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.717246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.717260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.300 qpair failed and we were unable to recover it. 00:24:08.300 [2024-04-27 00:58:00.717633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.717990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.718004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.300 qpair failed and we were unable to recover it. 00:24:08.300 [2024-04-27 00:58:00.718460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.718865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.718894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.300 qpair failed and we were unable to recover it. 00:24:08.300 [2024-04-27 00:58:00.719286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.719699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.719729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.300 qpair failed and we were unable to recover it. 00:24:08.300 [2024-04-27 00:58:00.720138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.720536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.720566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.300 qpair failed and we were unable to recover it. 
00:24:08.300 [2024-04-27 00:58:00.720988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.721385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.721415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.300 qpair failed and we were unable to recover it. 00:24:08.300 [2024-04-27 00:58:00.721779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.722126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.300 [2024-04-27 00:58:00.722157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.300 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.722591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.723063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.723100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.723565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.723897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.723926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.724396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.724873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.724913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.725236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.725617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.725646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.726105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.726558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.726587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 
00:24:08.301 [2024-04-27 00:58:00.727001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.727401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.727431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.727836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.728237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.728267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.728747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.729195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.729209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.729660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.730112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.730160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.730608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.730992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.731007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.731426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.731799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.731813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.732136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.732503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.732532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 
00:24:08.301 [2024-04-27 00:58:00.733010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.733357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.733387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.733844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.734184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.734215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.734618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.734956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.734986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.735336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.735734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.735763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.736173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.736569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.736599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.737079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.737444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.737459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.737903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.738354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.738385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 
00:24:08.301 [2024-04-27 00:58:00.738777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.739194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.739225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.739481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.739848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.739882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.740279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.740727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.740756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.741210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.741591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.741620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.742021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.742413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.742428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.742855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.743235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.743266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.743666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.743894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.743923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 
00:24:08.301 [2024-04-27 00:58:00.744269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.744666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.744696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.745039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.745435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.745450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.301 qpair failed and we were unable to recover it. 00:24:08.301 [2024-04-27 00:58:00.745816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.301 [2024-04-27 00:58:00.746108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.746138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.746449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.746878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.746908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.747244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.747720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.747769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.748146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.748558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.748588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.748988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.749379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.749408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 
00:24:08.302 [2024-04-27 00:58:00.749752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.750239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.750284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.750717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.751100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.751130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.751590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.752063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.752101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.752564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.752748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.752777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.753169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.753611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.753641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.754056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.754449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.754479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.754821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.755271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.755301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 
00:24:08.302 [2024-04-27 00:58:00.755780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.756234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.756269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.756665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.757010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.757039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.757453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.757789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.757818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.758311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.758712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.758741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.758926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.759254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.759285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.759719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.760192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.760222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.760593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.761017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.761046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 
00:24:08.302 [2024-04-27 00:58:00.761463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.761918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.761932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.762293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.762702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.762731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.763126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.763543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.763572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.763964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.764380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.764416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.764827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.765221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.765235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.765656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.765983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.766013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.766364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.766839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.766868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 
00:24:08.302 [2024-04-27 00:58:00.767353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.767754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.767783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.768192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.768669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.768699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.769205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.769658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.302 [2024-04-27 00:58:00.769687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.302 qpair failed and we were unable to recover it. 00:24:08.302 [2024-04-27 00:58:00.770169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.770566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.770595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.771026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.771429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.771459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.771919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.772370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.772400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.772818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.773267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.773297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 
00:24:08.303 [2024-04-27 00:58:00.773700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.774096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.774127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.774487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.774896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.774925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.775406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.775826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.775856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.776313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.776667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.776681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.777125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.777578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.777608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.778066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.778410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.778439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.778855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.779309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.779339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 
00:24:08.303 [2024-04-27 00:58:00.779736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.780127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.780141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.780590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.781041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.781088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.781597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.781954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.781983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.782446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.782897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.782926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.783410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.783837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.783866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.784348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.784772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.784801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.785286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.785679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.785709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 
00:24:08.303 [2024-04-27 00:58:00.786058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.786521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.786551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.786963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.787186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.787200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.787681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.788017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.788045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.788534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.788888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.788918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.789313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.789786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.789815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.790265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.790714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.790743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.791233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.791488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.791518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 
00:24:08.303 [2024-04-27 00:58:00.792003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.792338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.792353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.792713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.793166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.793196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.793587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.793934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.793964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.303 qpair failed and we were unable to recover it. 00:24:08.303 [2024-04-27 00:58:00.794358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.303 [2024-04-27 00:58:00.794681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.794711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.795197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.795678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.795708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.796113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.796523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.796552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.797030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.797399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.797430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 
00:24:08.304 [2024-04-27 00:58:00.797831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.798292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.798322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.798779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.799176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.799206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.799568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.800066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.800114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.800507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.800981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.801010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.801513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.801751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.801781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.802204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.802676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.802706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.803106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.803578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.803607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 
00:24:08.304 [2024-04-27 00:58:00.804038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.804393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.804423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.804878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.805360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.805390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.805858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.806245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.806275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.806674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.807144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.807173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.807512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.807928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.807957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.808362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.808771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.808801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.809278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.809736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.809766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 
00:24:08.304 [2024-04-27 00:58:00.810166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.810592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.810622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.811103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.811577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.811606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.812016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.812493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.812523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.812934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.813333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.813363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.813826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.814212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.814243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.814665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.815155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.815185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.815665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.816133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.816162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 
00:24:08.304 [2024-04-27 00:58:00.816635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.817119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.817150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.817504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.817979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.818009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.818479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.818819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.818848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.819327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.819756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.304 [2024-04-27 00:58:00.819771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.304 qpair failed and we were unable to recover it. 00:24:08.304 [2024-04-27 00:58:00.820126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.820493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.820521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.820701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.821176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.821207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.821636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.822036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.822065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 
00:24:08.305 [2024-04-27 00:58:00.822533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.822933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.822962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.823441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.823894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.823924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.824263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.824608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.824623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.825049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.825532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.825561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.826172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.826560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.826590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.827086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.827490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.827504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.827884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.828270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.828300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 
00:24:08.305 [2024-04-27 00:58:00.828625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.829103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.829133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.829473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.829862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.829876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.830247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.830557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.830572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.830967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.831315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.831345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.831748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.832151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.832181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.832603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.833058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.833095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.833546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.833931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.833961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 
00:24:08.305 [2024-04-27 00:58:00.834368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.834934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.834949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.835353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.835749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.835778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.836201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.836607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.836621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.836918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.837222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.837237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.837627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.838044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.838058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.305 qpair failed and we were unable to recover it. 00:24:08.305 [2024-04-27 00:58:00.838432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.305 [2024-04-27 00:58:00.838795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.838810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.839261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.839659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.839689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 
00:24:08.306 [2024-04-27 00:58:00.840174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.840578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.840608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.841112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.841455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.841484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.841883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.842351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.842381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.842863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.843287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.843317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.843650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.844044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.844082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.844477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.844901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.844915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.845340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.845770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.845800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 
00:24:08.306 [2024-04-27 00:58:00.846202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.846555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.846585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.846932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.847399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.847429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.847826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.848157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.848187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.848671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.849037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.849067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.849684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.850333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.850357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.850746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.851172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.851202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.851536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.851917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.851947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 
00:24:08.306 [2024-04-27 00:58:00.852280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.852673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.852703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.853106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.853587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.853616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.854015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.854402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.854432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.854926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.855311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.855326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.855754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.856157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.856187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.856607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.857060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.857099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.857443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.857850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.857880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 
00:24:08.306 [2024-04-27 00:58:00.858290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.858686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.858700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.858901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.859262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.859292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.859712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.859967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.859996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.860489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.860957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.860972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.861365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.861738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.861768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.306 [2024-04-27 00:58:00.862250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.862480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.306 [2024-04-27 00:58:00.862495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.306 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.862930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.863382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.863397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 
00:24:08.307 [2024-04-27 00:58:00.863777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.864177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.864207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.864614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.864956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.864986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.865469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.865867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.865896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.866294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.866756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.866786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.867226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.867623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.867653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.868109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.868565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.868600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.869017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.869361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.869392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 
00:24:08.307 [2024-04-27 00:58:00.869790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.870130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.870161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.870572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.870976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.871006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.871658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.872061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.872099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.872520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.872915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.872944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.873352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.873749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.873778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.874260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.874656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.874670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.875045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.875355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.875370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 
00:24:08.307 [2024-04-27 00:58:00.875763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.876106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.876137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.877465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.877880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.877902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.878498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.878799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.878830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.879201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.879672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.879703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.880658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.881061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.881106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.881524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.881833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.881863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.882341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.882676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.882705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 
00:24:08.307 [2024-04-27 00:58:00.883203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.883796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.883825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.884103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.884498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.884528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.884928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.885304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.885335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.885762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.886170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.886214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.886645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.887095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.887133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.887597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.888093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.888109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.307 qpair failed and we were unable to recover it. 00:24:08.307 [2024-04-27 00:58:00.888501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.888930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.307 [2024-04-27 00:58:00.888960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 
00:24:08.308 [2024-04-27 00:58:00.889584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.889997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.890026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.890549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.891009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.891039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.891481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.891871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.891885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.892249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.892574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.892604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.893091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.893476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.893506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.893936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.894334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.894365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.894760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.895246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.895276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 
00:24:08.308 [2024-04-27 00:58:00.895683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.896080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.896129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.896589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.896988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.897017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.897337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.897792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.897822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.898173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.898347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.898376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.898798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.899196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.899227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.899709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.900095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.900126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.900475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.900945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.900974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 
00:24:08.308 [2024-04-27 00:58:00.901334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.901793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.901822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.902256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.902718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.902748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.903157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.903551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.903581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.903997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.904397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.904427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.904913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.905364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.905395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.905852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.906334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.906366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.906770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.907143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.907158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 
00:24:08.308 [2024-04-27 00:58:00.907522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.907892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.907922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.908411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.908818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.908848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.909215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.909543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.909572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.909984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.910319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.910351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.910765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.911226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.911256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.911674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.912012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.912042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 00:24:08.308 [2024-04-27 00:58:00.912393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.912864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.912894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.308 qpair failed and we were unable to recover it. 
00:24:08.308 [2024-04-27 00:58:00.913232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.308 [2024-04-27 00:58:00.913631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.913660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.914139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.914496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.914526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.914930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.915346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.915377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.915858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.916269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.916312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.916753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.917215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.917245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.917587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.918063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.918102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.918530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.918934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.918949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 
00:24:08.309 [2024-04-27 00:58:00.919324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.919665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.919695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.920158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.920617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.920646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.921105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.921491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.921521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.921920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.922254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.922284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.922676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.922906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.922936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.923338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.923737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.923767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.924105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.924548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.924577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 
00:24:08.309 [2024-04-27 00:58:00.925035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.925441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.925472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.925884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.926267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.926306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.926746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.927199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.927229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.927647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.928062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.928101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.928507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.928900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.928930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.929387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.929794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.929823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.930307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.930698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.930727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 
00:24:08.309 [2024-04-27 00:58:00.931204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.931608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.931622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.932043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.932381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.932395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.932758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.933238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.933268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.309 qpair failed and we were unable to recover it. 00:24:08.309 [2024-04-27 00:58:00.933532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.934000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.309 [2024-04-27 00:58:00.934029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.934526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.934986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.935016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.935454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.935884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.935914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.936319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.936793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.936823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 
00:24:08.310 [2024-04-27 00:58:00.937296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.937774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.937803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.938281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.938669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.938683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.939062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.939281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.939310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.939768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.940171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.940203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.940695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.941105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.941135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.941624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.942016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.942030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.942428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.942850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.942879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 
00:24:08.310 [2024-04-27 00:58:00.943286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.943465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.943494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.943974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.944378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.944410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.944815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.945217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.945247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.945726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.946181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.946212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.946602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.947053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.947101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.947452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.947904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.947934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.948411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.948893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.948923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 
00:24:08.310 [2024-04-27 00:58:00.949320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.949723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.949752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.950218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.950673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.950703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.951187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.951576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.951606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.951963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.952392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.952423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.952661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.953063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.953101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.953558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.953969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.953998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.954414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.954866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.954895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 
00:24:08.310 [2024-04-27 00:58:00.955302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.955687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.955700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.956024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.956442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.956472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.956872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.957337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.957368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.310 [2024-04-27 00:58:00.957850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.958324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.310 [2024-04-27 00:58:00.958354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.310 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.958697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.959091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.959121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.959553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.959968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.959997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.960347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.960684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.960714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 
00:24:08.311 [2024-04-27 00:58:00.961171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.961624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.961654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.962132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.962582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.962612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.963009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.963472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.963503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.963919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.964307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.964337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.964774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.965250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.965281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.965687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.966135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.966165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.966583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.966984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.967013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 
00:24:08.311 [2024-04-27 00:58:00.967425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.967832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.967862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.968287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.968679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.968708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.969108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.969501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.969530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.969934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.970405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.970435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.970826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.971133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.971147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.971543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.971925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.971955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.972348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.972697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.972726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 
00:24:08.311 [2024-04-27 00:58:00.973140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.973598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.973627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.974019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.974406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.974437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.974850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.975182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.975212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.975668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.976086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.976116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.976524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.976923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.976952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.977350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.977755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.977785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.978190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.978595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.978624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 
00:24:08.311 [2024-04-27 00:58:00.979105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.979448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.979477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.979901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.980374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.980404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.980798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.981192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.981222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.981564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.982019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.982033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.311 qpair failed and we were unable to recover it. 00:24:08.311 [2024-04-27 00:58:00.982429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.982822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.311 [2024-04-27 00:58:00.982836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.312 qpair failed and we were unable to recover it. 00:24:08.312 [2024-04-27 00:58:00.983217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.312 [2024-04-27 00:58:00.983691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.312 [2024-04-27 00:58:00.983720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.312 qpair failed and we were unable to recover it. 00:24:08.312 [2024-04-27 00:58:00.984061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.312 [2024-04-27 00:58:00.984300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.312 [2024-04-27 00:58:00.984329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.312 qpair failed and we were unable to recover it. 
00:24:08.312 [2024-04-27 00:58:00.984786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.312 [2024-04-27 00:58:00.985169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.312 [2024-04-27 00:58:00.985184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.312 qpair failed and we were unable to recover it. 00:24:08.312 [2024-04-27 00:58:00.985512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.312 [2024-04-27 00:58:00.985906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.312 [2024-04-27 00:58:00.985936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.312 qpair failed and we were unable to recover it. 00:24:08.312 [2024-04-27 00:58:00.986282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.312 [2024-04-27 00:58:00.986711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.312 [2024-04-27 00:58:00.986741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.312 qpair failed and we were unable to recover it. 00:24:08.577 [2024-04-27 00:58:00.987192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.987635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.987650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.577 qpair failed and we were unable to recover it. 00:24:08.577 [2024-04-27 00:58:00.988076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.988500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.988535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.577 qpair failed and we were unable to recover it. 00:24:08.577 [2024-04-27 00:58:00.988932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.989334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.989364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.577 qpair failed and we were unable to recover it. 00:24:08.577 [2024-04-27 00:58:00.989845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.990286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.990317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.577 qpair failed and we were unable to recover it. 
00:24:08.577 [2024-04-27 00:58:00.990664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.991024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.991053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.577 qpair failed and we were unable to recover it. 00:24:08.577 [2024-04-27 00:58:00.991519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.991926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.991955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.577 qpair failed and we were unable to recover it. 00:24:08.577 [2024-04-27 00:58:00.992435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.992888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.992917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.577 qpair failed and we were unable to recover it. 00:24:08.577 [2024-04-27 00:58:00.993320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.993738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.993753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.577 qpair failed and we were unable to recover it. 00:24:08.577 [2024-04-27 00:58:00.994182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.994634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.994664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.577 qpair failed and we were unable to recover it. 00:24:08.577 [2024-04-27 00:58:00.995122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.995510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.995540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.577 qpair failed and we were unable to recover it. 00:24:08.577 [2024-04-27 00:58:00.995942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.577 [2024-04-27 00:58:00.996394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:00.996424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 
00:24:08.578 [2024-04-27 00:58:00.996823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:00.997310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:00.997344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:00.997851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:00.998271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:00.998286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:00.998663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:00.999078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:00.999109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:00.999571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:00.999973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.000003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.000406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.000863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.000893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.001306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.001740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.001769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.002249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.002655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.002685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 
00:24:08.578 [2024-04-27 00:58:01.003141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.003611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.003641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.004048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.004466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.004497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.004895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.005314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.005345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.005841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.006294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.006325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.006733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.007150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.007181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.007660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.008145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.008181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.008640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.009022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.009036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 
00:24:08.578 [2024-04-27 00:58:01.009426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.009825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.009839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.010153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.010517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.010531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.010924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.011132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.011147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.011518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.011964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.011980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.012366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.012746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.012760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.013189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.013485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.013515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.013908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.014381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.014411] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 
00:24:08.578 [2024-04-27 00:58:01.014845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.015236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.015251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.015674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.016114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.016150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.016498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.016892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.016922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.017381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.017614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.017644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.018057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.018487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.018517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.018868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.019275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.019305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 00:24:08.578 [2024-04-27 00:58:01.019707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.020102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.578 [2024-04-27 00:58:01.020132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.578 qpair failed and we were unable to recover it. 
00:24:08.578 [2024-04-27 00:58:01.020440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.020795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.020824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.021237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.021597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.021626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.022058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.022543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.022574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.023033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.023465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.023506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.023959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.024410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.024451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.024935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.025340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.025371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.026038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.026523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.026553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 
00:24:08.579 [2024-04-27 00:58:01.026982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.027387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.027418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.027819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.028352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.028383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.028804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.029255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.029286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.029711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.029942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.029957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.030282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.030626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.030655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.031008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.031442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.031472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.031646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.032119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.032150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 
00:24:08.579 [2024-04-27 00:58:01.032565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.032910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.032953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.033348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.033753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.033784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.034147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.034565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.034594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.034994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.035356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.035387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.035821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.036315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.036345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.036824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.037284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.037299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.037625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.038100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.038130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 
00:24:08.579 [2024-04-27 00:58:01.038530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.039009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.039023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.039450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.039783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.039812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.040244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.040730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.040760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.041153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.041489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.041519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.041919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.042506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.042536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.043021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.043441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.043472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 00:24:08.579 [2024-04-27 00:58:01.043824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.044274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.044305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.579 qpair failed and we were unable to recover it. 
00:24:08.579 [2024-04-27 00:58:01.044744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.045090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.579 [2024-04-27 00:58:01.045121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.045648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.046069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.046107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.046417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.046962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.046991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.047396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.047788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.047818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.048290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.048686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.048716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.049059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.049489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.049519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.049860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.050269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.050300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 
00:24:08.580 [2024-04-27 00:58:01.050672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.050994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.051024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.051441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.051834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.051863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.052347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.052739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.052768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.053111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.053467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.053497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.053890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.054297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.054329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.054718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.055200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.055231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.055649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.056012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.056041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 
00:24:08.580 [2024-04-27 00:58:01.056463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.056920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.056949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.057309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.057793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.057823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.058282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.058477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.058506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.058844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.059185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.059217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.059624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.059972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.060002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.060418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.060853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.060884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.061229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.061740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.061769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 
00:24:08.580 [2024-04-27 00:58:01.062109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.062515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.062544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.063030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.063406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.063436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.064005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.064418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.064449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.064857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.065310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.065341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.065799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.066303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.066335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.066698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.067097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.067128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.580 [2024-04-27 00:58:01.067540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.067900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.067929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 
00:24:08.580 [2024-04-27 00:58:01.068406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.068742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.580 [2024-04-27 00:58:01.068771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.580 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.069161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.069542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.069572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.070008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.070481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.070497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.070828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.071246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.071277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.071705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.072161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.072192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.072556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.073038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.073067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.073652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.074134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.074166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 
00:24:08.581 [2024-04-27 00:58:01.074516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.075021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.075050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.075499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.076043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.076082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.076550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.076952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.076966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.077422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.078149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.078178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.078602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.079248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.079264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.079666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.080129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.080145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.080456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.080832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.080847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 
00:24:08.581 [2024-04-27 00:58:01.081293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.081652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.081682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.082044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.082619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.082650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.083104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.083512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.083542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.084138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.084464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.084495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.084913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.085390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.085421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.085873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.086291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.086322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.086732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.087190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.087223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 
00:24:08.581 [2024-04-27 00:58:01.087635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.088044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.088083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.088500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.088934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.088948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.089307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.089683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.089713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.091107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.091520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.091554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.091853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.092270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.092303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.092694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.093119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.093150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.093581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.094128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.094160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 
00:24:08.581 [2024-04-27 00:58:01.094574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.095003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.095017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.581 qpair failed and we were unable to recover it. 00:24:08.581 [2024-04-27 00:58:01.095516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.581 [2024-04-27 00:58:01.095927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.095957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.096411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.096824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.096855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.097231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.097707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.097738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.098127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.098523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.098553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.098872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.099965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.099995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.100455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.101229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.101255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 
00:24:08.582 [2024-04-27 00:58:01.101655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.102125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.102156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.102571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.103029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.103059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.103524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.103874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.103905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.104393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.104785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.104815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.105227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.105679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.105693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.106119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.106499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.106514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.106844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.107349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.107380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 
00:24:08.582 [2024-04-27 00:58:01.108344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.108701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.108733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.109161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.109538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.109554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.109947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.110426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.110457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.110872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.111345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.111379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.111784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.112221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.112252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.112762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.113189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.113205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.113593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.114067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.114094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 
00:24:08.582 [2024-04-27 00:58:01.114470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.114851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.114866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.115243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.115654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.115668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.116068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.116352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.116368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.116805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.117265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.582 [2024-04-27 00:58:01.117281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.582 qpair failed and we were unable to recover it. 00:24:08.582 [2024-04-27 00:58:01.117665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.118116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.118132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.118512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.118870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.118885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.119273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.119613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.119628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 
00:24:08.583 [2024-04-27 00:58:01.120094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.120451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.120465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.120858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.121308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.121323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.121691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.122181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.122196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.122677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.123025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.123055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.123521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.123849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.123863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.124292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.124619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.124648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.125112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.125537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.125567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 
00:24:08.583 [2024-04-27 00:58:01.126013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.126494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.126525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.127037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.127532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.127564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.127974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.128378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.128409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.128813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.129292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.129324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.129783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.130256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.130271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.130629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.131002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.131031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.131519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.131904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.131919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 
00:24:08.583 [2024-04-27 00:58:01.132294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.132675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.132704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.133176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.133582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.133612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.134165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.134565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.134594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.134998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.135399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.135430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.135844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.136244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.136275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.136713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.137185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.137200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.137538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.137900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.137930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 
00:24:08.583 [2024-04-27 00:58:01.138341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.138795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.138825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.139255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.139674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.139704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.140140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.140556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.140586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.141097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.141563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.141594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.583 [2024-04-27 00:58:01.142010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.142412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.583 [2024-04-27 00:58:01.142442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.583 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.142873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.143301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.143331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.143792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.144194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.144225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 
00:24:08.584 [2024-04-27 00:58:01.144590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.145094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.145125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.145514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.145921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.145951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.146295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.146657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.146671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.147131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.147488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.147501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.147977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.148420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.148435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.148815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.149207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.149226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.149672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.150121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.150136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 
00:24:08.584 [2024-04-27 00:58:01.150563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.151017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.151032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.151455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.152012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.152042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.152563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.152969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.152999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.153543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.153958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.153994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.154450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.154855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.154884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.155292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.155694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.155723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.156146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.156555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.156585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 
00:24:08.584 [2024-04-27 00:58:01.157024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.157393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.157424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.157855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.158292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.158311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.158729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.159213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.159244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.159763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.160250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.160282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.160695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.161185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.161217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.161691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.162176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.162228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.162595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.163042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.163081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 
00:24:08.584 [2024-04-27 00:58:01.163495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.163989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.164019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.164387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.164743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.164772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.165250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.165757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.165787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.166239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.166555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.166569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.167008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.167496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.167534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.584 qpair failed and we were unable to recover it. 00:24:08.584 [2024-04-27 00:58:01.167899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.168338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.584 [2024-04-27 00:58:01.168354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.168741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.169138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.169169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 
00:24:08.585 [2024-04-27 00:58:01.169631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.170027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.170056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.170438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.170827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.170857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.171292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.171655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.171685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.172146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.172551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.172566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.172888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.173282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.173313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.173731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.174170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.174202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.174620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.175025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.175054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 
00:24:08.585 [2024-04-27 00:58:01.175453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.175897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.175932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.176353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.176786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.176815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.177227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.177637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.177667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.178094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.178449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.178478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.178943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.179411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.179442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.179926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.180380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.180412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.180905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.181387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.181418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 
00:24:08.585 [2024-04-27 00:58:01.181884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.182294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.182323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.182739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.183163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.183191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.183607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.184144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.184173] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.184524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.185026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.185055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.185562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.186036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.186067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.186534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.186988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.187018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.187483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.188481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.188510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 
00:24:08.585 [2024-04-27 00:58:01.189019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.189461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.189494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.189930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.190355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.190388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.190794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.191130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.191146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.191601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.192049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.192089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.192458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.192944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.192975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.193423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.193933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.193964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.585 qpair failed and we were unable to recover it. 00:24:08.585 [2024-04-27 00:58:01.194404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.585 [2024-04-27 00:58:01.195681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.195709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 
00:24:08.586 [2024-04-27 00:58:01.196110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.196448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.196464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.196848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.197091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.197107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.197350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.197641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.197671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.198132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.198602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.198633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.199066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.199500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.199530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.199893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.200354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.200370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.200792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.201186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.201217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 
00:24:08.586 [2024-04-27 00:58:01.201687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.202179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.202210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.202621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.203135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.203167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.203605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.204131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.204163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.204562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.205010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.205040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.205559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.206089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.206121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.206543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.206960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.206990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.207495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.207911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.207941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 
00:24:08.586 [2024-04-27 00:58:01.208394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.208754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.208784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.209287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.209702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.209733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.210230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.210648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.210678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.211146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.211562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.211592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.212018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.212539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.212570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.212942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.213435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.213468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.213890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.214378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.214409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 
00:24:08.586 [2024-04-27 00:58:01.214783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.215206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.215238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.215670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.216061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.216094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.586 [2024-04-27 00:58:01.216433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.216766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.586 [2024-04-27 00:58:01.216797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.586 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.217245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.217655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.217686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.218135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.218489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.218520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.218880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.219304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.219335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.219707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.220201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.220233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 
00:24:08.587 [2024-04-27 00:58:01.220658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.221046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.221101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.221521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.222002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.222033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.222465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.222902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.222933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.223398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.223810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.223840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.224347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.224716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.224747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.225238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.225615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.225631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.226061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.226482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.226512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 
00:24:08.587 [2024-04-27 00:58:01.227044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.227419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.227451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.227947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.228365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.228397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.228816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.229280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.229297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.229704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.230180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.230197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.230650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.231270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.231301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.231688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.232151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.232183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.232612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.232953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.232982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 
00:24:08.587 [2024-04-27 00:58:01.233435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.233854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.233885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.234389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.234818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.234849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.235348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.235829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.235859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.236341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.236810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.236841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.237363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.237860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.237890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.238327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.238745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.238775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.239338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.239701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.239732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 
00:24:08.587 [2024-04-27 00:58:01.240208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.240653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.240683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.241171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.241618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.241648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.242094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.242586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.242616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.587 qpair failed and we were unable to recover it. 00:24:08.587 [2024-04-27 00:58:01.243130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.587 [2024-04-27 00:58:01.243601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.243631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.244121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.244552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.244583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.245131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.245552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.245582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.246128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.246622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.246652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 
00:24:08.588 [2024-04-27 00:58:01.247060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.247521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.247552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.248054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.248515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.248546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.249129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.249648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.249679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.250113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.250537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.250575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.250981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.251430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.251461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.251887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.252374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.252390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.252781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.253234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.253266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 
00:24:08.588 [2024-04-27 00:58:01.253654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.254147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.254163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.254612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.255102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.255135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.255568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.256022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.256053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.256437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.256801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.256832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.257272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.257713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.257745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.258255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.258688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.258720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.259184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.259672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.259703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 
00:24:08.588 [2024-04-27 00:58:01.260193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.260619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.260650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.261160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.261530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.261562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.262001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.262603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.262619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.263012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.263405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.263436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.263818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.264232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.264264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.264697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.265167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.265185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.265573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.265946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.265976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 
00:24:08.588 [2024-04-27 00:58:01.266482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.266872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.588 [2024-04-27 00:58:01.266888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.588 qpair failed and we were unable to recover it. 00:24:08.588 [2024-04-27 00:58:01.267345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.850 [2024-04-27 00:58:01.267720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.850 [2024-04-27 00:58:01.267738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.850 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.268139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.268557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.268572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.269096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.269524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.269554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.270028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.270463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.270479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.270940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.271337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.271369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.271828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.272274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.272306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 
00:24:08.851 [2024-04-27 00:58:01.272874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.273336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.273352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.273694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.274180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.274197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.274595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.275110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.275143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.275588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.276062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.276102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.276613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.277028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.277058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.277498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.277926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.277956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.278420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.278862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.278893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 
00:24:08.851 [2024-04-27 00:58:01.279386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.279786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.279818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.280246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.280742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.280772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.281310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.281780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.281811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.282247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.282650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.282681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.283185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.283593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.283623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.284100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.284509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.284539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.285033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.285516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.285533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 
00:24:08.851 [2024-04-27 00:58:01.285869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.286332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.286349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.286787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.287226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.287243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.287680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.288182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.288219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.288705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.289228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.289260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.289797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.290272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.290305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.290799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.291290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.291323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.291828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.292347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.292379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 
00:24:08.851 [2024-04-27 00:58:01.292837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.293204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.293243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.293638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.294090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.851 [2024-04-27 00:58:01.294122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.851 qpair failed and we were unable to recover it. 00:24:08.851 [2024-04-27 00:58:01.294602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.295095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.295127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.295513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.295936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.295967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.296413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.296845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.296875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.297337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.297762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.297798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.298329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.298704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.298735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 
00:24:08.852 [2024-04-27 00:58:01.299238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.299608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.299639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.300135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.300511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.300542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.301029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.301469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.301501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.301878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.302268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.302285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.302634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.303068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.303112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.303679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.304155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.304189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.304621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.305057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.305098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 
00:24:08.852 [2024-04-27 00:58:01.305509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.305959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.305989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.306472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.306905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.306940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.307442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.307888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.307919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.308359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.308861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.308892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.309308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.309725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.309756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.310229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.310641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.310672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.311110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.311615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.311646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 
00:24:08.852 [2024-04-27 00:58:01.312221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.312656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.312687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.313198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.313613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.313645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.314159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.314636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.314668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.315165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.315588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.315619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.316100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.316574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.316611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.317058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.317503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.317533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.318018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.318408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.318440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 
00:24:08.852 [2024-04-27 00:58:01.318819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.319240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.319272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.319689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.320184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.320216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.852 [2024-04-27 00:58:01.320650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.321080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.852 [2024-04-27 00:58:01.321112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.852 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.321470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.321903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.321934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.322428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.322849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.322880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.323434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.323793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.323823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.324240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.324650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.324681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 
00:24:08.853 [2024-04-27 00:58:01.325105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.325586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.325617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.326263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.326895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.326926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.327418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.327838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.327869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.328370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.328718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.328748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.329236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.329663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.329700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.330193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.330767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.330797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.331169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.331827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.331843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 
00:24:08.853 [2024-04-27 00:58:01.332238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.332576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.332592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.333043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.333432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.333464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.333848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.334371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.334402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.334836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.335253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.335285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.335904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.337090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.337124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.337589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.338097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.338129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 00:24:08.853 [2024-04-27 00:58:01.338630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.338976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.853 [2024-04-27 00:58:01.339007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.853 qpair failed and we were unable to recover it. 
00:24:08.853 [... the same sequence (two posix_sock_create connect() failures with errno = 111, an nvme_tcp_qpair_connect_sock error for tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats with only the timestamps changing, through 2024-04-27 00:58:01.478 ...]
00:24:08.859 [2024-04-27 00:58:01.478830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.479264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.479297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.479750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.480186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.480219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.480648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.481133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.481167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.481618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.482114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.482146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.482599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.483129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.483161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.483685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.484180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.484212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.484736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.485156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.485188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 
00:24:08.859 [2024-04-27 00:58:01.485677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.486171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.486203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.486649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.487057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.487098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.487595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.488053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.488094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.488509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.488857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.488888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.489310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.489731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.489762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.490268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.490679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.490709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.491138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.491607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.491638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 
00:24:08.859 [2024-04-27 00:58:01.492165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.492577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.492608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.493061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.493431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.493462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.493938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.494344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.494376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.494849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.495344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.495376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.495881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.496324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.496356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.496851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.497322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.497354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.497778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.498221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.498254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 
00:24:08.859 [2024-04-27 00:58:01.498636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.499045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.499084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.499584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.500083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.500115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.500590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.501089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.501121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.501641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.502109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.502142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.859 [2024-04-27 00:58:01.502595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.502957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.859 [2024-04-27 00:58:01.502988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.859 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.503466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.503959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.503990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.504463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.504961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.504992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 
00:24:08.860 [2024-04-27 00:58:01.505536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.505955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.505986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.506533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.506971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.507001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.507421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.507901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.507932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.508458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.508884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.508914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.509424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.509921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.509952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.510406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.510897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.510927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.511357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.511869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.511899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 
00:24:08.860 [2024-04-27 00:58:01.512342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.512816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.512847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.513351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.513864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.513895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.514337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.514752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.514783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.515290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.515793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.515824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.516320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.516762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.516794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.517200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.517671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.517703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.518176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.518526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.518557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 
00:24:08.860 [2024-04-27 00:58:01.518997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.519459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.519492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.519857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.520299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.520331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.520872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.521365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.521398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.521921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.522502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.522535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.523115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.523632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.523663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.524087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.524477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.524507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.524996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.525429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.525464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 
00:24:08.860 [2024-04-27 00:58:01.525902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.526370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.526403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.860 qpair failed and we were unable to recover it. 00:24:08.860 [2024-04-27 00:58:01.526879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.860 [2024-04-27 00:58:01.527380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.527414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.527869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.528309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.528341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.528723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.529191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.529223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.529654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.530162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.530195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.530629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.531066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.531126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.531606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.532035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.532067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 
00:24:08.861 [2024-04-27 00:58:01.532559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.532999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.533029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.533470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.534044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.534087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.534522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.535009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.535039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.535524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.535938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.535969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.536400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.536838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.536869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.537320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.537752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.537783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.538279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.538698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.538729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 
00:24:08.861 [2024-04-27 00:58:01.539163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.539623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.539640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.540080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.540480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.540497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.540955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.541374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.541391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.541795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.542195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:08.861 [2024-04-27 00:58:01.542226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:08.861 qpair failed and we were unable to recover it. 00:24:08.861 [2024-04-27 00:58:01.542702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.126 [2024-04-27 00:58:01.543207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.126 [2024-04-27 00:58:01.543223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.126 qpair failed and we were unable to recover it. 00:24:09.126 [2024-04-27 00:58:01.543575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.126 [2024-04-27 00:58:01.544035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.126 [2024-04-27 00:58:01.544052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.126 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.544453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.544920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.544951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 
00:24:09.127 [2024-04-27 00:58:01.545467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.545881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.545911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.546345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.546857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.546888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.547389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.547788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.547819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.548337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.548806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.548837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.549356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.549753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.549784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.550259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.550679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.550710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.551229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.551696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.551727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 
00:24:09.127 [2024-04-27 00:58:01.552236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.552701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.552733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.553194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.553643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.553674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.554160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.554528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.554559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.555049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.555481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.555512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.556025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.556509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.556541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.557032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.557416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.557448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.557918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.558390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.558423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 
00:24:09.127 [2024-04-27 00:58:01.558810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.559304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.559321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.559809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.560306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.560338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.560793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.561297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.561329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.561722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.562231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.562270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.562615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.563111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.563144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.563591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.564028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.564058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.564495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.564990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.565021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 
00:24:09.127 [2024-04-27 00:58:01.565533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.565993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.566024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.566544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.567040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.567083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.567522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.568031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.568061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.568608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.569088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.569120] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.569643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.570145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.570176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.570700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.571169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.571201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.127 qpair failed and we were unable to recover it. 00:24:09.127 [2024-04-27 00:58:01.571671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.127 [2024-04-27 00:58:01.572170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.572187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 
00:24:09.128 [2024-04-27 00:58:01.572626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.573022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.573053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.573454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.573803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.573834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.574253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.574756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.574787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.575316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.575820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.575851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.576278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.576711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.576742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.577224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.577647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.577678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.578173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.578585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.578620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 
00:24:09.128 [2024-04-27 00:58:01.579106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.579547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.579581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.580001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.580460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.580492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.580991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.581501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.581533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.582057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.582521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.582553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.583066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.583570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.583615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.584041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.584541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.584572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.585093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.585576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.585607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 
00:24:09.128 [2024-04-27 00:58:01.586110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.586538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.586555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.586970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.587464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.587496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.588027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.588485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.588502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.588940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.589351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.589383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.589881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.590382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.590400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.590787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.591258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.591291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.591772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.592263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.592295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 
00:24:09.128 [2024-04-27 00:58:01.592746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.593222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.593238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.593704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.594138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.594154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.594622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.595111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.595131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.595537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.596031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.596062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.596584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.597061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.597110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.128 [2024-04-27 00:58:01.597741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.598256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.128 [2024-04-27 00:58:01.598288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.128 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.598682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.599146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.599198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 
00:24:09.129 [2024-04-27 00:58:01.599623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.600085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.600117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.600590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.601057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.601097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.601607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.602040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.602082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.602715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.603395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.603427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.603952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.604320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.604352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.604839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.605202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.605220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.605678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.606178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.606210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 
00:24:09.129 [2024-04-27 00:58:01.606688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.607114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.607130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.607594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.607986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.608003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.608462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.608979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.609010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.609536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.610201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.610216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.610622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.611006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.611022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.611418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.611839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.611870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.612370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.612808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.612824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 
00:24:09.129 [2024-04-27 00:58:01.613225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.613655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.613686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.614106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.614544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.614581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.614979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.615470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.615503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.616027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.616464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.616480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.616801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.617181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.617213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.617629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.618090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.618122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 00:24:09.129 [2024-04-27 00:58:01.618642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.619093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.129 [2024-04-27 00:58:01.619126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.129 qpair failed and we were unable to recover it. 
00:24:09.130 [2024-04-27 00:58:01.619577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.620084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.620101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.620568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.620938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.620955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.621363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.621959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.621990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.622516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.622936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.622952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.623398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.623848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.623878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.624311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.624808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.624838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.625348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.625880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.625911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 
00:24:09.130 [2024-04-27 00:58:01.626459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.626859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.626890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.627391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.627793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.627824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.628322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.628793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.628824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.629321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.629819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.629850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.630378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.630788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.630818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.631311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.631751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.631782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.632208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.632655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.632686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 
00:24:09.130 [2024-04-27 00:58:01.633133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.633554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.633585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.634101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.634615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.634646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.635163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.635633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.635663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.636138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.636558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.636590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.636963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.637369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.637400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.637908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.638428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.638460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.638967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.639401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.639433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 
00:24:09.130 [2024-04-27 00:58:01.639928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.640337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.640368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.640798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.641301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.641333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.641839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.642255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.642288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.642745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.643214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.643247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.130 [2024-04-27 00:58:01.643698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.644210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.130 [2024-04-27 00:58:01.644243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.130 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.644758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.645090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.645122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.645624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.646053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.646106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 
00:24:09.131 [2024-04-27 00:58:01.646611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.647049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.647090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.647499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.647919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.647950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.648429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.648899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.648930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.649447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.649882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.649913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.650409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.650772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.650803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.651295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.651714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.651745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.652192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.652604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.652635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 
00:24:09.131 [2024-04-27 00:58:01.653081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.653588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.653618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.654148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.654569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.654600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.655140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.655639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.655671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.656085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.656455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.656498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.656963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.657345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.657376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.657857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.658285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.658317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.658755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.659246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.659278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 
00:24:09.131 [2024-04-27 00:58:01.659722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.660193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.660224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.660641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.661130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.661171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.661640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.662150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.662187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.662702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.663209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.663241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.663766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.664204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.664236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.131 qpair failed and we were unable to recover it. 00:24:09.131 [2024-04-27 00:58:01.664724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.665190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.131 [2024-04-27 00:58:01.665222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.665694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.666113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.666145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 
00:24:09.132 [2024-04-27 00:58:01.666637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.667042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.667080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.667502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.667916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.667947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.668444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.668851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.668882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.669393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.669893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.669924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.670467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.670962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.670993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.671501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.672033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.672064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.672623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.673053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.673095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 
00:24:09.132 [2024-04-27 00:58:01.673545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.673966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.673998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.674493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.674915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.674945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.675492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.675995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.676026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.676455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.676889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.676919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.677355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.677857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.677886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.678411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.678880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.678911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.679412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.679882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.679913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 
00:24:09.132 [2024-04-27 00:58:01.680356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.680853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.680884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.681426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.681842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.681873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.682315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.682731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.682762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.683245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.683670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.683701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.684152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.684574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.684605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.685067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.685501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.685531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.686033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.686466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.686499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 
00:24:09.132 [2024-04-27 00:58:01.686911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.687400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.687432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.687972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.688446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.688479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.688912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.689339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.689371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.689870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.690268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.690300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.690775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.691268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.691299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.132 [2024-04-27 00:58:01.691834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.692360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.132 [2024-04-27 00:58:01.692393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.132 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.692939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.693439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.693471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 
00:24:09.133 [2024-04-27 00:58:01.693976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.694418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.694450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.694881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.695321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.695353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.695838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.696324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.696356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.696867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.697365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.697397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.697924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.698339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.698372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.698875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.699293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.699324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.699821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.700336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.700369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 
00:24:09.133 [2024-04-27 00:58:01.700893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.701408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.701440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.701889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.702385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.702417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.702842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.703277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.703309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.703714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.704125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.704166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.704610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.705101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.705133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.705565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.706096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.706128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.706576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.706982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.707013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 
00:24:09.133 [2024-04-27 00:58:01.707528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.708043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.708093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.708641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.709156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.709187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.709711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.710129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.710161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.710688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.711210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.711242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.711656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.712148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.712181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.712691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.713118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.713156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.713546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.713962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.713993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 
00:24:09.133 [2024-04-27 00:58:01.714540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.714992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.715022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.715474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.715884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.715913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.716405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.716761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.716777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.717258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.717752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.717783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.718206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.718727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.718757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.719204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.719695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.719726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.133 qpair failed and we were unable to recover it. 00:24:09.133 [2024-04-27 00:58:01.720276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.133 [2024-04-27 00:58:01.720766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.720798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 
00:24:09.134 [2024-04-27 00:58:01.721307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.721836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.721867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.722292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.722787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.722817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.723373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.723788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.723819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.724312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.724781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.724813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.725242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.725750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.725782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.726271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.726739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.726770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.727193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.727684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.727715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 
00:24:09.134 [2024-04-27 00:58:01.728249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.728759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.728791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.729241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.729738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.729769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.730226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.730577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.730608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.731088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.731514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.731529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.731989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.732370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.732402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.732898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.733413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.733445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.733944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.734364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.734396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 
00:24:09.134 [2024-04-27 00:58:01.734938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.735452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.735484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.735979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.736495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.736538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.737067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.737443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.737473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.737892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.738360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.738392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.738805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.739296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.739312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.739698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.740188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.740220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.740776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.741255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.741291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 
00:24:09.134 [2024-04-27 00:58:01.741815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.742233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.742266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.742744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.743235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.743267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.743777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.744197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.744229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.744592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.745080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.745112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.745528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.745992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.746022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.746509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.746928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.746958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 00:24:09.134 [2024-04-27 00:58:01.747515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.748023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.134 [2024-04-27 00:58:01.748054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.134 qpair failed and we were unable to recover it. 
00:24:09.135 [2024-04-27 00:58:01.748539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.749035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.749065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.749601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.749952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.749983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.750463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.750863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.750899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.751330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.751834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.751864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.752305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.752799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.752815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.753314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.753848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.753879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.754383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.754852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.754884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 
00:24:09.135 [2024-04-27 00:58:01.755387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.755826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.755857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.756345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.756834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.756866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.757403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.757811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.757840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.758268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.758760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.758791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.759283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.759815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.759845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.760369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.760883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.760919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.761442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.761882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.761913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 
00:24:09.135 [2024-04-27 00:58:01.762405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.762920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.762951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.763383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.763822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.763852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.764348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.764755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.764771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.765238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.765757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.765788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.766245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.766744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.766775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.767278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.767777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.767807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.768167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.768655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.768685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 
00:24:09.135 [2024-04-27 00:58:01.769224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.769693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.769723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.770167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.770611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.770647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.771084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.771559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.771590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.772064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.772604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.772634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.773135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.773566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.773598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.774028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.774539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.774571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 00:24:09.135 [2024-04-27 00:58:01.775118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.775611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.775641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.135 qpair failed and we were unable to recover it. 
00:24:09.135 [2024-04-27 00:58:01.776089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.135 [2024-04-27 00:58:01.776562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.776601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.777093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.777580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.777597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.778040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.778578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.778610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.779160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.779655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.779698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.780147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.780637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.780669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.781162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.781684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.781714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.782144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.782557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.782588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 
00:24:09.136 [2024-04-27 00:58:01.783030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.783534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.783566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.784019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.784516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.784547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.785092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.785587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.785618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.786148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.786668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.786700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.787231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.787698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.787729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.788153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.788644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.788675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.789209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.789701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.789732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 
00:24:09.136 [2024-04-27 00:58:01.790273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.790728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.790759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.791135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.791604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.791635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.792110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.792601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.792633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.793189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.793608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.793639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.794152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.794512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.794543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.795041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.795522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.795553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.796025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.796405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.796438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 
00:24:09.136 [2024-04-27 00:58:01.796872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.797300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.797317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.797770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.798188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.798220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.798767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.799307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.799343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.136 qpair failed and we were unable to recover it. 00:24:09.136 [2024-04-27 00:58:01.799848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.136 [2024-04-27 00:58:01.800266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.800298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.800845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.801291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.801323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.801747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.802258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.802290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.802784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.803244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.803275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 
00:24:09.137 [2024-04-27 00:58:01.803796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.804309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.804341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.804760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.805228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.805260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.805765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.806229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.806261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.806682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.807176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.807207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.807664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.808111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.808143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.808623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.808974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.808990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.809433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.809944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.809974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 
00:24:09.137 [2024-04-27 00:58:01.810408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.810853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.810884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.811379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.811841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.811856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.812250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.812625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.812641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.813118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.813620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.813637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.814122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.814620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.814636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.815166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.815603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.137 [2024-04-27 00:58:01.815635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.137 qpair failed and we were unable to recover it. 00:24:09.137 [2024-04-27 00:58:01.816117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.816527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.816544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 
00:24:09.405 [2024-04-27 00:58:01.816979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.817310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.817328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.817769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.818209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.818242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.818753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.819222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.819254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.819762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.820121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.820153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.820627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.821093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.821126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.821637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.822035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.822065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.822577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.822982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.823011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 
00:24:09.405 [2024-04-27 00:58:01.823510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.824026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.824042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.824515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.824940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.824971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.825466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.825985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.826017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.826457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.826951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.826982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.827423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.827865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.827896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.828358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.828872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.828903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.829397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.829917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.829957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 
00:24:09.405 [2024-04-27 00:58:01.830436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.830838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.830868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.831392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.831836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.831867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.832364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.832814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.832845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.833346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.833848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.833879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.834390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.834853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.834870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.835329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.835715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.835731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.836127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.836581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.836612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 
00:24:09.405 [2024-04-27 00:58:01.837107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.837544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.837575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.838054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.838487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.838519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.405 qpair failed and we were unable to recover it. 00:24:09.405 [2024-04-27 00:58:01.839085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.405 [2024-04-27 00:58:01.839637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.839683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.840103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.840571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.840602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.841095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.841549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.841579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.842100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.842479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.842509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.843006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.843509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.843547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 
00:24:09.406 [2024-04-27 00:58:01.843984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.844316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.844332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.844800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.845206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.845223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.845672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.846102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.846134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.846566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.847064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.847119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.847593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.848004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.848036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.848460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.848897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.848928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.849345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.849765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.849796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 
00:24:09.406 [2024-04-27 00:58:01.850267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.850680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.850696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.851139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.851489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.851520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.851977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.852470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.852502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.852929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.853397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.853429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.853909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.854313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.854344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.854766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.855139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.855156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.855599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.856053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.856102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 
00:24:09.406 [2024-04-27 00:58:01.856517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.857031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.857048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.857442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.857819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.857834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.858332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.858809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.858840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.859218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.859624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.859663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.860104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.860509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.860546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.860990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.861516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.861548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.862081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.862483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.862514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 
00:24:09.406 [2024-04-27 00:58:01.863003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.863422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.863455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.863901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.864393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.864425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.406 [2024-04-27 00:58:01.864842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.865279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.406 [2024-04-27 00:58:01.865311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.406 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.865795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.866284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.866315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.866718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.867137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.867154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.867622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.868098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.868129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.868629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.869080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.869113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 
00:24:09.407 [2024-04-27 00:58:01.869611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.870082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.870099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.870568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.870964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.870995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.871493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.872012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.872043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.872508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.873004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.873035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.873568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.874056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.874096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.874542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.875045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.875088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.875639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.876087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.876119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 
00:24:09.407 [2024-04-27 00:58:01.876591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.877096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.877129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.877573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.877928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.877959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.878445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.878961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.878977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.879446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.879864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.879895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.880297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.880788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.880818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.881177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.881662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.881692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.882134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.882638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.882668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 
00:24:09.407 [2024-04-27 00:58:01.883012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.883524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.883555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.884093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.884507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.884538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.885037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.885473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.885505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.885921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.886420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.886453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.886962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.887378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.887410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.887838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.888284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.888315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.888801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.889268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.889301] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 
00:24:09.407 [2024-04-27 00:58:01.889736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.890218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.890270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.890773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.891267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.891299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.891804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.892296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.892328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.407 qpair failed and we were unable to recover it. 00:24:09.407 [2024-04-27 00:58:01.892819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.407 [2024-04-27 00:58:01.893223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.893254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.893681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.894108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.894140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.894580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.895030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.895061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.895501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.895929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.895966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 
00:24:09.408 [2024-04-27 00:58:01.896381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.896872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.896902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.897371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.897840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.897870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.898327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.898798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.898828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.899301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.899791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.899821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.900363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.900862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.900893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.901263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.901700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.901730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.902221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.902722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.902754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 
00:24:09.408 [2024-04-27 00:58:01.903286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.903716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.903747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.904249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.904774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.904804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.905362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.905829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.905875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.906234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.906684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.906700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.907172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.907591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.907621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.908038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.908472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.908503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.908973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.909443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.909475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 
00:24:09.408 [2024-04-27 00:58:01.909976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.910493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.910525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.911045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.911548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.911580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.912116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.912585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.912616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.913092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.913583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.913613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.914062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.914504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.914534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.915029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.915555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.915591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.916110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.916583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.916614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 
00:24:09.408 [2024-04-27 00:58:01.917057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.917661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.917693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.918206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.918677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.918707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.919127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.919590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.919620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.920115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.920582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.920613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.408 qpair failed and we were unable to recover it. 00:24:09.408 [2024-04-27 00:58:01.921023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.408 [2024-04-27 00:58:01.921523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.921555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.922045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.922531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.922563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.922988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.923436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.923467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 
00:24:09.409 [2024-04-27 00:58:01.923963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.924408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.924440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.924948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.925310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.925353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.925847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.926357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.926389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.926910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.927310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.927342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.927764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.928275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.928307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.928851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.929320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.929352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.929865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.930294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.930326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 
00:24:09.409 [2024-04-27 00:58:01.930856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.931370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.931402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.931875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.932307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.932339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.932838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.933354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.933386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.933832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.934326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.934359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.934861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.935358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.935390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.935833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.936303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.936334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.936812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.937307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.937339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 
00:24:09.409 [2024-04-27 00:58:01.937867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.938334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.938367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.938856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.939373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.939407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.939818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.940235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.940267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.940764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.941278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.941295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.941762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.942254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.942286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.942741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.943235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.943267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.409 qpair failed and we were unable to recover it. 00:24:09.409 [2024-04-27 00:58:01.943635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.409 [2024-04-27 00:58:01.944068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.944111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 
00:24:09.410 [2024-04-27 00:58:01.944543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.945010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.945041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.945595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.945953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.945984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.946475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.946939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.946970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.947505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.947942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.947973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.948507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.949009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.949040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.949517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.950005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.950036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.950610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.951100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.951134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 
00:24:09.410 [2024-04-27 00:58:01.951589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.952007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.952037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.952560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.952982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.953013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.953393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.953833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.953864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.954373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.954846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.954878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.955312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.955762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.955793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.956292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.956738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.956768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.957224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.957669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.957700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 
00:24:09.410 [2024-04-27 00:58:01.958185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.958601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.958631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.959054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.959478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.959510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.959881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.960349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.960382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.960878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.961248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.961265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.961696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.962176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.962208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.962751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.963130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.963162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.963658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.964171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.964214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 
00:24:09.410 [2024-04-27 00:58:01.964678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.965196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.965229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.965659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.966102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.966150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.966614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.967089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.967122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.967615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.968124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.968158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.968607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.969023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.969055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.969557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.969991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.970023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 00:24:09.410 [2024-04-27 00:58:01.970432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.970819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.410 [2024-04-27 00:58:01.970849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.410 qpair failed and we were unable to recover it. 
00:24:09.410 [2024-04-27 00:58:01.971355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.971727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.971759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.972161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.972632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.972663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.973170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.973598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.973629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.974135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.974491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.974522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.974953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.975443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.975476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.976012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.976437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.976470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.976995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.977470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.977503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 
00:24:09.411 [2024-04-27 00:58:01.978042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.978538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.978571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.979086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.979581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.979612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.980147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.980576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.980608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.981168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.981508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.981541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.981961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.982389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.982421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.982909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.983326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.983358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.983879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.984316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.984349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 
00:24:09.411 [2024-04-27 00:58:01.984783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.985278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.985311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.985793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.986288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.986320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.986756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.987247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.987279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.987634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.988061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.988105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.988585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.989086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.989119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.989541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.989882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.989913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.990351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.990819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.990849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 
00:24:09.411 [2024-04-27 00:58:01.991276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.991755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.991787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.992288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.992711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.992741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.993245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.993723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.993755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.994186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.994467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.994499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.994938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.995409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.995441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.995881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.996279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.996311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.411 [2024-04-27 00:58:01.996814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.997277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.997294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 
00:24:09.411 [2024-04-27 00:58:01.997596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.998062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.411 [2024-04-27 00:58:01.998106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.411 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:01.998481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:01.998970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:01.999001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:01.999439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:01.999934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:01.999965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.000374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.000864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.000896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.001310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.001842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.001871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.002376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.002842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.002871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.003376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.003804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.003835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 
00:24:09.412 [2024-04-27 00:58:02.004343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.004762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.004792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.005213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.005634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.005666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.006100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.006366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.006382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.006771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.007235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.007267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.007686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.008058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.008099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.008515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.008938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.008968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.009464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.009886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.009916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 
00:24:09.412 [2024-04-27 00:58:02.010356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.010769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.010799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.011270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.011764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.011796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.012241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.012653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.012683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.013102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.013590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.013621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.014030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.014456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.014488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.014982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.015378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.015410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.015777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.016274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.016305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 
00:24:09.412 [2024-04-27 00:58:02.016726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.017084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.017116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.017602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.018064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.018104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.018513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.018942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.018973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.019319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.019664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.019694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.020131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.020635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.020666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.021092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.021500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.021530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.021896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.022334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.022366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 
00:24:09.412 [2024-04-27 00:58:02.022858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.023266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.412 [2024-04-27 00:58:02.023299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.412 qpair failed and we were unable to recover it. 00:24:09.412 [2024-04-27 00:58:02.023797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.024223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.024239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.024630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.024998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.025029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.025544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.026064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.026103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.026547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.026910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.026940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.027423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.027836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.027867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.028180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.028604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.028635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 
00:24:09.413 [2024-04-27 00:58:02.029056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.029561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.029592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.030044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.030521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.030551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.031047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.031529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.031560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.032057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.032530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.032562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.033062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.033498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.033529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.034030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.034468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.034499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.034939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.035444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.035476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 
00:24:09.413 [2024-04-27 00:58:02.035885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.036296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.036339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.036772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.037214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.037246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.037656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.038121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.038152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.038687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.039190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.039207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.039651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.040090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.040122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.040557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.041010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.041041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.041479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.041978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.041994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 
00:24:09.413 [2024-04-27 00:58:02.042477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.042888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.042918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.043330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.043774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.043804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.044204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.044635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.044665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.045094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.045534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.045564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.046018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.046429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.046460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.046887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.047291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.047323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.047794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.048257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.048294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 
00:24:09.413 [2024-04-27 00:58:02.048815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.049230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.049261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.413 qpair failed and we were unable to recover it. 00:24:09.413 [2024-04-27 00:58:02.049748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.050240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.413 [2024-04-27 00:58:02.050272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.050699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.051208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.051240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.051671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.052178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.052195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.052662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.053177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.053209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.053723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.054210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.054242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.054783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.055294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.055327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 
00:24:09.414 [2024-04-27 00:58:02.055847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.056313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.056346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.056832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.057188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.057221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.057639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.058153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.058205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.058678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.059112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.059146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.059632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.060119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.060151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.060687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.061202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.061234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.061667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.062173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.062205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 
00:24:09.414 [2024-04-27 00:58:02.062635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.063144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.063177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.063721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.064189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.064222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.064583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.065097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.065130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.065660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.066082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.066114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.066655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.067098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.067130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.067640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.068151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.068202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.068678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.069149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.069181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 
00:24:09.414 [2024-04-27 00:58:02.069538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.070027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.070058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.070496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.071054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.071097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.071645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.072112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.072147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.072622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.073020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.073052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.414 [2024-04-27 00:58:02.073506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.073985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.414 [2024-04-27 00:58:02.074015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.414 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.074534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.075123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.075155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.075615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.076128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.076160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 
00:24:09.415 [2024-04-27 00:58:02.076611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.077125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.077156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.077699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.078127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.078160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.078646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.079125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.079158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.079632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.080040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.080056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.080470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.080856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.080887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.081365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.081725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.081755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.082224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.082697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.082728] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 
00:24:09.415 [2024-04-27 00:58:02.083165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.083640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.083670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.084094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.084587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.084627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.085104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.085477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.085508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.085989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.086380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.086412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.086881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.087323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.087340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.087811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.088223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.088241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.088721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.089221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.089253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 
00:24:09.415 [2024-04-27 00:58:02.089741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.090236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.090268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.090762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.091267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.091284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.091676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.092170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.092202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.415 [2024-04-27 00:58:02.092610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.093105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.415 [2024-04-27 00:58:02.093122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.415 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.093629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.094083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.094100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.094482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.094884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.094900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.095313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.095739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.095769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 
00:24:09.681 [2024-04-27 00:58:02.096283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.096615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.096646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.097316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.097753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.097771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.098218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.098612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.098628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.099117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.099512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.099528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.099922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.100372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.100389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.100838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.101261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.101277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.101695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.102091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.102108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 
00:24:09.681 [2024-04-27 00:58:02.102442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.102883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.102899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.103369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.103832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.103850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.104247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.104695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.104711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.105171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.105645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.105662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.106120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.106445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.106461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.106923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.107383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.107400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.107786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.108251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.108267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 
00:24:09.681 [2024-04-27 00:58:02.108757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.109174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.109206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.109572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.110095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.110112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.110606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.111101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.111133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.111544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.111984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.112015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.112478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.112990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.113005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.113407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.113801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.681 [2024-04-27 00:58:02.113817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.681 qpair failed and we were unable to recover it. 00:24:09.681 [2024-04-27 00:58:02.114280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.114672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.114703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 
00:24:09.682 [2024-04-27 00:58:02.115379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.115821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.115852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.116210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.116778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.116810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.117286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.117703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.117719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.118108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.118517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.118550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.119049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.119488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.119522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.119963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.120381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.120414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.120831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.121248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.121279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 
00:24:09.682 [2024-04-27 00:58:02.121727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.122158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.122191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.122692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.123157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.123190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.123618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.124061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.124105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.124651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.125006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.125037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.125474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.125906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.125938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.126416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.126776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.126807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.127308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.127751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.127782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 
00:24:09.682 [2024-04-27 00:58:02.128287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.128695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.128727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.129233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.129670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.129702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.130202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.130674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.130704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.131229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.131699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.131715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.132195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.132600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.132631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.133132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.133565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.133595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.134125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.134590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.134621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 
00:24:09.682 [2024-04-27 00:58:02.134997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.135490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.135523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.136035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.136517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.136550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.136974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.137383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.137415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.137911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.138429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.138462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.138892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.139325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.139358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.139796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.140150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.140182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.140654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.141028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.141059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 
00:24:09.682 [2024-04-27 00:58:02.141550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.142052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.142105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.142483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.142904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.142920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.143324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.143753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.143785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.144296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.144709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.144740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.145168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.145663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.145695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.146152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.146572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.146603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.147100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.147571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.147602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 
00:24:09.682 [2024-04-27 00:58:02.148035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.148513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.148545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.148982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.149396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.149428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.149923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.150291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.150323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.150801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.151166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.151197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.151616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.152109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.152140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.152640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.153144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.153176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.153607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.154026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.154057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 
00:24:09.682 [2024-04-27 00:58:02.154558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.155050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.155094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.155539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.156004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.156034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.156472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.156889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.156919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.157350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.157782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.157813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.158237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.158723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.158755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.682 [2024-04-27 00:58:02.159198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.159579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.682 [2024-04-27 00:58:02.159609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.682 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.160028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.160521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.160552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 
00:24:09.683 [2024-04-27 00:58:02.160973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.161386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.161425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.161834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.162250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.162281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.162648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.162992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.163023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.163442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.163928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.163959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.164306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.164726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.164757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.165251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.165656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.165686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.166184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.166528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.166558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 
00:24:09.683 [2024-04-27 00:58:02.166984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.167482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.167514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.167944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.168456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.168488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.168856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.169321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.169354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.169779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.170207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.170239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.170708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.171179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.171211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.171662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.172128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.172162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.172638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.173106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.173139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 
00:24:09.683 [2024-04-27 00:58:02.173611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.174189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.174221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.174727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.175221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.175254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.175727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.176123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.176154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.176573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.176925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.176956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.177533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.178025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.178056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.178441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.178853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.178884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.179353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.179821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.179852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 
00:24:09.683 [2024-04-27 00:58:02.180340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.180827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.180858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.181348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.181805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.181835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.182194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.182597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.182626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.183052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.183550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.183581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.184049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.184522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.184553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.184964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.185449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.185481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.185976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.186369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.186401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 
00:24:09.683 [2024-04-27 00:58:02.186809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.187272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.187303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.187773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.188234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.188266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.188735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.189224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.189256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.189747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.190233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.190270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.190766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.191204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.191236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.191612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.192022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.192052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.192481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.192896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.192926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 
00:24:09.683 [2024-04-27 00:58:02.193396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.193879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.193909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.194324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.194744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.194774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.195249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.195644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.195674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.195979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.196476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.196507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.196991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.197472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.197503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.197704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.198182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.198213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.198639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.199032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.199067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 
00:24:09.683 [2024-04-27 00:58:02.199488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.199869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.199899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.200382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.200866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.200896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.201309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.201733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.201762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.202173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.202406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.202437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.202857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.203264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.203295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.203625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.204108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.204139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.683 [2024-04-27 00:58:02.204626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.205053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.205095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 
00:24:09.683 [2024-04-27 00:58:02.205581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.205921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.683 [2024-04-27 00:58:02.205951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.683 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.206375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.206861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.206892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.207382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.207795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.207831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.208245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.208726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.208756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.209220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.209680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.209710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.210180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.210541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.210572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.211058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.211474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.211504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 
00:24:09.684 [2024-04-27 00:58:02.211847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.212034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.212064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.212563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.212962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.212992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.213453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.213936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.213965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.214315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.214711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.214741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.215170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.215631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.215662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.216154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.216500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.216544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.217035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.217394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.217426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 
00:24:09.684 [2024-04-27 00:58:02.217890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.218373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.218405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.218893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.219360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.219391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.219790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.220272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.220303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.220786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.221173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.221204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.221623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.222031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.222061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.222564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.223019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.223049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.223546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.223897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.223928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 
00:24:09.684 [2024-04-27 00:58:02.224334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.224726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.224756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.225179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.225658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.225689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.226157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.226556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.226585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.226986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.227410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.227441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.227928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.228387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.228419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.228820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.229255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.229286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.229680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.230106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.230137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 
00:24:09.684 [2024-04-27 00:58:02.230566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.230972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.231001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.231417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.231870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.231901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.232332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.232763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.232793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.233278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.233694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.233724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.234157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.234394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.234424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.234939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.235393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.235424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.235841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.236232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.236264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 
00:24:09.684 [2024-04-27 00:58:02.236662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.237126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.237157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.237640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.237917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.237947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.238340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.238734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.238764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.239195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.239679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.239709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.240134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.240556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.240586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.241044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.241290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.241319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.241722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.242109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.242141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 
00:24:09.684 [2024-04-27 00:58:02.242645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.242988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.243018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.243457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.243845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.243875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.684 qpair failed and we were unable to recover it. 00:24:09.684 [2024-04-27 00:58:02.244278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.244683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.684 [2024-04-27 00:58:02.244713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.245052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.245466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.245496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.245957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.246438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.246469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.246875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.247270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.247302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.247766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.248153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.248184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 
00:24:09.685 [2024-04-27 00:58:02.248646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.249125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.249157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.249624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.250086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.250116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.250483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.250868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.250897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.251356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.251779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.251808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.252271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.252660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.252690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.253087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.253557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.253586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.254064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.254554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.254584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 
00:24:09.685 [2024-04-27 00:58:02.255047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.255529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.255560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.255987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.256442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.256473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.256819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.257273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.257305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.257708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.258126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.258157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.258617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.259005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.259035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.259440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.259776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.259803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.260216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.260669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.260698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 
00:24:09.685 [2024-04-27 00:58:02.261191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.261524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.261553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.261960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.262372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.262404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.262823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.263055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.263094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.263500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.263901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.263930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.264435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.264842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.264872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.265306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.265709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.265739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.266197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.266653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.266683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 
00:24:09.685 [2024-04-27 00:58:02.267013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.267434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.267465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.267880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.268331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.268362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.268822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.269227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.269257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.269745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.270166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.270196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.270687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.271026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.271055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.271471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.271946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.271976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.272368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.272766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.272795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 
00:24:09.685 [2024-04-27 00:58:02.273204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.273690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.273719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.274154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.274652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.274682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.275028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.275377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.275392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.275838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.276215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.276247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.276715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.277166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.277197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.277679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.278101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.278132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.278539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.278929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.278959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 
00:24:09.685 [2024-04-27 00:58:02.279441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.279849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.279878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.280302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.280754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.280784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.281198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.281599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.281628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.282002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.282367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.282382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.282766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.283172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.283203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.283599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.284008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.284038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.284470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.284900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.284929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 
00:24:09.685 [2024-04-27 00:58:02.285356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.285834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.285877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.286312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.286700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.286731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.287148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.287598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.287635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.287997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.288396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.288426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.288904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.289306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.289337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.289737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.290126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.290156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.685 [2024-04-27 00:58:02.290548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.290872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.290901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 
00:24:09.685 [2024-04-27 00:58:02.291375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.291771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.685 [2024-04-27 00:58:02.291801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.685 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.292282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.292683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.292713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.293170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.293643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.293673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.294160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.294487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.294517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.294922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.295397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.295428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.295833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.296261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.296292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.296736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.297139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.297170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 
00:24:09.686 [2024-04-27 00:58:02.297513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.297966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.297996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.298476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.298888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.298918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.299375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.299850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.299880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.300220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.300609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.300639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.301120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.301505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.301535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.301955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.302187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.302217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.302673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.303140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.303170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 
00:24:09.686 [2024-04-27 00:58:02.303603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.304019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.304049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.304518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.304946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.304976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.305388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.305863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.305893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.306317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.306737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.306768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.307229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.307707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.307737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.308202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.308590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.308620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.309022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.309439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.309454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 
00:24:09.686 [2024-04-27 00:58:02.309605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.310001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.310030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.310439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.310841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.310870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.311274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.311702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.311732] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.312163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.312513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.312543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.312961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.313289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.313320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.313745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.314133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.314164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.314560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.315057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.315107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 
00:24:09.686 [2024-04-27 00:58:02.315538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.315874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.315903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.316318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.316791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.316820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.317327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.317785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.317815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.318211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.318486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.318516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.319021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.319402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.319433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.319862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.320337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.320353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.320799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.321187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.321217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 
00:24:09.686 [2024-04-27 00:58:02.321695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.322049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.322092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.322527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.322998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.323037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.323399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.323747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.323776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.324197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.324604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.324634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.325117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.325349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.325379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.325837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.326226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.326257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.326754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.327205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.327236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 
00:24:09.686 [2024-04-27 00:58:02.327639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.328038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.328068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.328405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.328802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.328832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.329236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.329593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.329608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.329999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.330395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.330413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.330884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.331199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.331230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.331665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.332086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.332101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 00:24:09.686 [2024-04-27 00:58:02.332417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.332875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.332889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.686 qpair failed and we were unable to recover it. 
00:24:09.686 [2024-04-27 00:58:02.333272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.333724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.686 [2024-04-27 00:58:02.333754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.334148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.334566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.334596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.334937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.335332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.335347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.335775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.336198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.336213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.336592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.336997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.337027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.337433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.337666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.337695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.338154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.338543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.338578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 
00:24:09.687 [2024-04-27 00:58:02.339005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.339343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.339358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.339730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.340135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.340166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.340616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.341066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.341105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.341515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.341918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.341947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.342403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.342749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.342779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.343231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.343701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.343716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.344169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.344521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.344550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 
00:24:09.687 [2024-04-27 00:58:02.344967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.345364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.345394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.345836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.346430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.346445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.346896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.347348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.347388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.347889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.348296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.348327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.348735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.349185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.349216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.349648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.349863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.349892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.350241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.350545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.350559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 
00:24:09.687 [2024-04-27 00:58:02.350961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.351397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.351428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.351852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.352301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.352332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.352673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.353014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.353028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.353461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.353856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.353886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.354213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.354856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.354886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.355131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.355535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.355570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.356025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.356475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.356506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 
00:24:09.687 [2024-04-27 00:58:02.356933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.357355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.357370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.357683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.358093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.358125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.358607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.359214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.359229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.359664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.360137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.360152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.360534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.361017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.361047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.361530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.362148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.362179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.362576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.362899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.362928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 
00:24:09.687 [2024-04-27 00:58:02.363317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.363677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.363691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.364001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.364398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.364429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.364942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.365413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.365428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.365737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.366112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.366127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.366421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.366790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.366805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.367194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.367601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.367631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.367990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.368445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.368461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 
00:24:09.687 [2024-04-27 00:58:02.368849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.369212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.369243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.369635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.370036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.687 [2024-04-27 00:58:02.370065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.687 qpair failed and we were unable to recover it. 00:24:09.687 [2024-04-27 00:58:02.370496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.370766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.370781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.371217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.371604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.371618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.372045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.372473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.372487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.372924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.373346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.373361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.373731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.374129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.374159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 
00:24:09.955 [2024-04-27 00:58:02.374635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.375038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.375067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.375577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.375969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.375999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.376413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.376763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.376793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.377289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.377682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.377711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.378144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.378544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.378574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.378964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.379419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.379450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.379848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.380251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.380282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 
00:24:09.955 [2024-04-27 00:58:02.380599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.380934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.380965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.381388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.381777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.381808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.382226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.382697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.382726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.383151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.383555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.383585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.383923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.384248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.384279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.384668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.385149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.385179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 00:24:09.955 [2024-04-27 00:58:02.385652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.386055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.386105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.955 qpair failed and we were unable to recover it. 
00:24:09.955 [2024-04-27 00:58:02.386624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.955 [2024-04-27 00:58:02.387048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.387092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.387512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.387988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.388018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.388434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.388836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.388866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.389269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.389689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.389718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.390213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.390586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.390616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.391022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.391503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.391518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.391883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.392237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.392267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 
00:24:09.956 [2024-04-27 00:58:02.392750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.393202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.393233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.393647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.394097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.394128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.394461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.394852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.394881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.395290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.395771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.395800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.396302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.396705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.396735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.397215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.397612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.397642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.398036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.398468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.398483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 
00:24:09.956 [2024-04-27 00:58:02.398847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.399320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.399351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.399776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.400257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.400288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.400797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.401233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.401264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.401675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.402149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.402180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.402534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.402921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.402951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.403363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.403756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.403786] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.404194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.404650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.404679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 
00:24:09.956 [2024-04-27 00:58:02.405136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.405526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.405557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.406033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.406425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.406456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.406934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.407405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.407436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.407682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.408025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.408054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.408455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.408859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.408889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.409375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.409849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.409879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.956 [2024-04-27 00:58:02.410357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.410685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.410714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 
00:24:09.956 [2024-04-27 00:58:02.411092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.411570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.956 [2024-04-27 00:58:02.411600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.956 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.412039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.412449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.412480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.412879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.413336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.413367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.413782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.414205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.414248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.414671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.415138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.415168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.415577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.415987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.416017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.416456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.416863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.416893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 
00:24:09.957 [2024-04-27 00:58:02.417352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.417805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.417834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.418294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.418817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.418846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.419238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.419694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.419724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.420131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.420541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.420571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.420981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.421330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.421361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.421842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.422246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.422276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.422689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.423144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.423159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 
00:24:09.957 [2024-04-27 00:58:02.423487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.423982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.424019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.424415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.424750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.424779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.425191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.425593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.425623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.426112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.426580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.426595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.426962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.427254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.427270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.427660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.428084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.428115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.428519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.428858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.428887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 
00:24:09.957 [2024-04-27 00:58:02.429350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.429825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.429855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.430338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.430576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.430605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.431089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.431492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.431522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.431999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.432478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.432509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.432900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.433285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.433316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.433793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.434201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.434232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.434632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.435067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.435117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 
00:24:09.957 [2024-04-27 00:58:02.435564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.436035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.436065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.957 qpair failed and we were unable to recover it. 00:24:09.957 [2024-04-27 00:58:02.436557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.957 [2024-04-27 00:58:02.437031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.437061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.437472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.437862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.437891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.438375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.438673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.438702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.438941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.439437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.439468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.439973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.440444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.440475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.440833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.441289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.441319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 
00:24:09.958 [2024-04-27 00:58:02.441724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.442130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.442161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.442595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.443011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.443041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.443470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.443924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.443953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.444359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.444845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.444875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.445335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.445725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.445755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.446153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.446585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.446614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.447043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.447514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.447544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 
00:24:09.958 [2024-04-27 00:58:02.448000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.448496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.448526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.448937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.449413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.449443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.449848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.450333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.450364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.450760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.451088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.451119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.451461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.451918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.451947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.452413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.452886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.452900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.453221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.453624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.453653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 
00:24:09.958 [2024-04-27 00:58:02.454053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.454404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.454434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.454841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.455195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.455226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.455629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.456014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.456044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.456546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.456939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.456969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.457372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.457824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.457855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.458279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.458736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.458765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.459190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.459675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.459705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 
00:24:09.958 [2024-04-27 00:58:02.460161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.460526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.460556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.958 qpair failed and we were unable to recover it. 00:24:09.958 [2024-04-27 00:58:02.461032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.958 [2024-04-27 00:58:02.461518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.461533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.461918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.462367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.462398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.462870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.463270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.463285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.463562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.463945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.463974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.464346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.464822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.464851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.465277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.465679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.465708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 
00:24:09.959 [2024-04-27 00:58:02.466098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.466489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.466518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.467006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.467405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.467436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.467878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.468186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.468201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.468509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.468868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.468903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.469308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.469732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.469762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.470182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.470517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.470532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.470910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.471329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.471360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 
00:24:09.959 [2024-04-27 00:58:02.471777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.472203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.472240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.472613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.473007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.473037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.473452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.473907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.473937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.474425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.474826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.474855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.475266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.475671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.475701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.476057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.476421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.476451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.476845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.477143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.477183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 
00:24:09.959 [2024-04-27 00:58:02.477662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.478133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.478164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.478404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.478738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.478767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.479170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.479587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.479617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.959 qpair failed and we were unable to recover it. 00:24:09.959 [2024-04-27 00:58:02.479989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.480348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.959 [2024-04-27 00:58:02.480380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.480868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.481194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.481208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.481520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.481978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.482008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.482411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.482861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.482876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 
00:24:09.960 [2024-04-27 00:58:02.483182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.483520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.483550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.484021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.484419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.484449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.484796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.485248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.485296] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.485803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.486043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.486094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.486505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.486903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.486933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.487277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.487729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.487758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.488237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.488629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.488658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 
00:24:09.960 [2024-04-27 00:58:02.489050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.489456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.489485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.489853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.490272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.490303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.490667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.491094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.491125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.491533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.491917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.491946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.492375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.492820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.492835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.493069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.493395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.493425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.493803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.494207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.494238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 
00:24:09.960 [2024-04-27 00:58:02.494643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.495108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.495140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.495544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.495944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.495973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.496376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.496797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.496827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.497235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.497493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.497523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.497943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.498360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.498390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.498751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.499102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.499134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 00:24:09.960 [2024-04-27 00:58:02.499462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.499860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.960 [2024-04-27 00:58:02.499889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.960 qpair failed and we were unable to recover it. 
00:24:09.960 [2024-04-27 00:58:02.500344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.500564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.500593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.501006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.501277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.501293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.501635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.502023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.502052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.502457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.502844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.502859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.503221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.503667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.503696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.504093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.504429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.504459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.504802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.505183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.505198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 
00:24:09.961 [2024-04-27 00:58:02.505565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.505920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.505950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.506438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.506794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.506823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.507280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.507681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.507712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.508114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.508525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.508555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.508947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.509353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.509384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.509786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.510162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.510177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.510499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.510820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.510850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 
00:24:09.961 [2024-04-27 00:58:02.511203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.511683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.511713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.512059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.512404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.512434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.512841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.513296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.513327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.513754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.514101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.514132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.514578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.514975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.515005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.515419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.515763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.515793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.516199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.516611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.516641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 
00:24:09.961 [2024-04-27 00:58:02.517219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.517650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.517681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.518037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.518442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.518457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.518857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.519224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.519255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.519666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.520013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.520041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.520503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.520845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.520860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.521160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.521544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.521574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 00:24:09.961 [2024-04-27 00:58:02.521914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.522813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.522840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.961 qpair failed and we were unable to recover it. 
00:24:09.961 [2024-04-27 00:58:02.523243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.961 [2024-04-27 00:58:02.523645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.523674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.524134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.524499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.524528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.524919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.525389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.525421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.525901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.526322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.526353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.526710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.527083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.527115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.527526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.527929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.527959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.528393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.528742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.528772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 
00:24:09.962 [2024-04-27 00:58:02.529179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.529561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.529575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.529958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.530412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.530442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.530772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.531376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.531392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.531763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.532206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.532223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.532613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.532931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.532960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.533171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.533877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.533900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.534322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.534746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.534776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 
00:24:09.962 [2024-04-27 00:58:02.535154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.535572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.535602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.535940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.536268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.536300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.536714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.537469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.537494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.537876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.539207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.539235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.539565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.540032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.540062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.540429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.540826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.540841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.541155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.541456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.541470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 
00:24:09.962 [2024-04-27 00:58:02.541897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.542353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.542368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.542521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.542842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.542856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.543170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.543469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.543483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.543786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.544173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.544188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.544614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.544932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.544947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.545276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.545814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.545828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 00:24:09.962 [2024-04-27 00:58:02.546153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.546535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.546550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.962 qpair failed and we were unable to recover it. 
00:24:09.962 [2024-04-27 00:58:02.546849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.962 [2024-04-27 00:58:02.547311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.547327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.547711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.548087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.548102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.548415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.548732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.548747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.549110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.549426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.549441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.549744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.550095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.550127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.550533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.550879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.550910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.551324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.551505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.551535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 
00:24:09.963 [2024-04-27 00:58:02.551866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.552109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.552141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.552485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.552872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.552902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.553372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.553716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.553746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.554207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.554529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.554559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.554949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.555252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.555267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.555590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.555941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.555971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.556150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.556552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.556582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 
00:24:09.963 [2024-04-27 00:58:02.556982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.557367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.557399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.557722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.558030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.558059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.558667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.559118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.559149] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.559519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.559860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.559889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.560236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.560580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.560610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.560960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.561317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.561348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.561694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.562126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.562157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 
00:24:09.963 [2024-04-27 00:58:02.562393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.562621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.562651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.562979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.563938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.563964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.564366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.565044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.565078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.565466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.565675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.565690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.566014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.566379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.566394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.566820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.567114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.567130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 00:24:09.963 [2024-04-27 00:58:02.567435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.567805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.567819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.963 qpair failed and we were unable to recover it. 
00:24:09.963 [2024-04-27 00:58:02.568191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.568414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.963 [2024-04-27 00:58:02.568429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 [2024-04-27 00:58:02.568793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.569102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.569117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 [2024-04-27 00:58:02.569566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.569870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.569884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1815654 Killed "${NVMF_APP[@]}" "$@" 00:24:09.964 [2024-04-27 00:58:02.570202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.570581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.570595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 00:58:02 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:24:09.964 [2024-04-27 00:58:02.570910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 00:58:02 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:09.964 [2024-04-27 00:58:02.571219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.571233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 00:58:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:09.964 00:58:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:09.964 [2024-04-27 00:58:02.571625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:24:09.964 [2024-04-27 00:58:02.571938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.571953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 
00:24:09.964 [2024-04-27 00:58:02.572273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.572586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.572600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 [2024-04-27 00:58:02.572903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.573284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.573299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 [2024-04-27 00:58:02.573663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.573965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.573979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 [2024-04-27 00:58:02.574274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.574633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.574647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 [2024-04-27 00:58:02.575068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.575452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.575467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 [2024-04-27 00:58:02.575854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.576219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.576233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 [2024-04-27 00:58:02.576611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.576969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.576984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 
00:24:09.964 [2024-04-27 00:58:02.577434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 00:58:02 -- nvmf/common.sh@470 -- # nvmfpid=1816419 00:24:09.964 [2024-04-27 00:58:02.577740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.577755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 00:58:02 -- nvmf/common.sh@471 -- # waitforlisten 1816419 00:24:09.964 [2024-04-27 00:58:02.578030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 00:58:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:09.964 00:58:02 -- common/autotest_common.sh@817 -- # '[' -z 1816419 ']' 00:24:09.964 [2024-04-27 00:58:02.578475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.578491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 00:58:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.964 [2024-04-27 00:58:02.578811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 00:58:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:09.964 00:58:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.964 [2024-04-27 00:58:02.579185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.964 [2024-04-27 00:58:02.579202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 00:58:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:09.964 [2024-04-27 00:58:02.579571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:24:09.964 [2024-04-27 00:58:02.579874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.579889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 [2024-04-27 00:58:02.580040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.580637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.580652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 
00:24:09.964 [2024-04-27 00:58:02.581027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.581397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.581413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 [2024-04-27 00:58:02.581765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.582153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.582168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.964 qpair failed and we were unable to recover it. 00:24:09.964 [2024-04-27 00:58:02.582524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.964 [2024-04-27 00:58:02.582893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.582908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.583339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.583765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.583779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.584153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.584284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.584298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.584667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.585110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.585125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.585436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.585813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.585828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 
00:24:09.965 [2024-04-27 00:58:02.586196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.586404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.586419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.586811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.587133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.587148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.587521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.587875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.587890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.588267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.588694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.588709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.589080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.589469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.589483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.589849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.590288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.590302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.590625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.590987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.591000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 
00:24:09.965 [2024-04-27 00:58:02.591336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.591682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.591695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.592237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.592603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.592617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.592951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.593372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.593387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.593701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.593856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.593871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.594234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.594600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.594614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.594936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.595250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.595265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.595636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.595954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.595968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 
00:24:09.965 [2024-04-27 00:58:02.596285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.596598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.596612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.597021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.597357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.597371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.597746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.598086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.598101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.598882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.599271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.599288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.599516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.599652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.599667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.599976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.600297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.600312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.600619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.600935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.600950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 
00:24:09.965 [2024-04-27 00:58:02.601156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.601462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.601477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.601824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.602131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.602147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.602519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.602828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.965 [2024-04-27 00:58:02.602843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.965 qpair failed and we were unable to recover it. 00:24:09.965 [2024-04-27 00:58:02.603146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.603445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.603460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.603908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.604202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.604217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.604600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.604955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.604970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.605286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.605681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.605695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 
00:24:09.966 [2024-04-27 00:58:02.606085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.606452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.606466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.606846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.607227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.607242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.607666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.608024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.608039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.608362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.608748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.608762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.609131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.609551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.609566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.609925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.610231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.610245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.610550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.610973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.610988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 
00:24:09.966 [2024-04-27 00:58:02.611366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.611734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.611748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.612190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.612557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.612570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.612938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.613324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.613339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.613772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.614093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.614108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.614562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.614859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.614873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.615266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.615716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.615733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.616179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.616599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.616614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 
00:24:09.966 [2024-04-27 00:58:02.617010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.617332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.617348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.617714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.617977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.617991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.618354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.618729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.618744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.619129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.619518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.619532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.619957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.620404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.620419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.620792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.621174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.621189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.621657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.621691] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:24:09.966 [2024-04-27 00:58:02.621729] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.966 [2024-04-27 00:58:02.622047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.622060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.622448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.622822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.622836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.623212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.623640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.623655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.624082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.624448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.624463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.966 qpair failed and we were unable to recover it. 00:24:09.966 [2024-04-27 00:58:02.624680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.966 [2024-04-27 00:58:02.625107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.625122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.625487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.625935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.625950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.626279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.626650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.626664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 
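(Annotation, not part of the captured console output.) Interleaved with the connect failures, an SPDK/DPDK process named "nvmf" is initializing with EAL core mask 0xF0, i.e. bits 4-7 set, so four cores in total, which is consistent with the "Total cores available: 4" notice a little further down. A quick bash check of the mask (illustrative arithmetic only):
# illustrative only: count the bits set in the EAL core mask
mask=0xF0
count=0
for ((i = 0; i < 32; i++)); do
  if (( (mask >> i) & 1 )); then
    count=$((count + 1))
  fi
done
echo "cores selected by $mask: $count"   # prints 4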
00:24:09.967 [2024-04-27 00:58:02.627044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.627356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.627372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.627683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.628057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.628082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.628392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.628764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.628779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.629209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.629662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.629677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.630080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.630302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.630317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.630713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.631022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.631037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.631492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.631783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.631797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 
00:24:09.967 [2024-04-27 00:58:02.632109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.632485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.632499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.632924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.633301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.633316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.633703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.634101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.634116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.634427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.634798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.634811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.635278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.635664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.635678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.636043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.636341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.636356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.636741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.637130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.637145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 
00:24:09.967 [2024-04-27 00:58:02.637545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.637717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.637729] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.638103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.638547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.638561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.638992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.639295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.639308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.639634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.640089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.640103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.640382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.640748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.640761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:09.967 [2024-04-27 00:58:02.641285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.641681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.967 [2024-04-27 00:58:02.641694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:09.967 qpair failed and we were unable to recover it. 00:24:10.235 [2024-04-27 00:58:02.642077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.235 [2024-04-27 00:58:02.642413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.235 [2024-04-27 00:58:02.642426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.235 qpair failed and we were unable to recover it. 
00:24:10.235 [2024-04-27 00:58:02.642785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.235 [2024-04-27 00:58:02.643146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.643160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.643541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.643914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.643928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.644249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.644390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.644403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.644826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.645246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.645260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.645640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.646037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.646049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.646474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.646916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.646929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.647250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.647674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.647687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 
00:24:10.236 [2024-04-27 00:58:02.647994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.648422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.648436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.648750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.649054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.649067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.649452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.649766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.649780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.236 [2024-04-27 00:58:02.650155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.650598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.650611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.651079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.651458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.651471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.651751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.652076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.652089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.652483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.652800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.652815] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 
00:24:10.236 [2024-04-27 00:58:02.653124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.653614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.653628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.653932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.654315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.654329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.654689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.655137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.655151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.655509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.655870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.655883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.656270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.656643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.656656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.657054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.657379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.657392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.657698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.657994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.658007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 
00:24:10.236 [2024-04-27 00:58:02.658364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.658731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.658744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.659169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.659473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.659487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.659860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.660303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.660316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.660644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.661032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.661045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.661202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.661623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.661636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.661939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.662334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.662348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.236 [2024-04-27 00:58:02.662668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.663092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.663106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 
00:24:10.236 [2024-04-27 00:58:02.663463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.664077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.236 [2024-04-27 00:58:02.664091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.236 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.664416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.664788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.664801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.665166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.665469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.665483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.665844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.666284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.666298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.666602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.666902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.666915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.667224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.667533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.667546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.667916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.668212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.668226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 
00:24:10.237 [2024-04-27 00:58:02.668591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.668894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.668908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.669450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.669941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.669954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.670338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.670700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.670713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.671110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.671478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.671491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.671805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.672107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.672121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.672545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.672840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.672853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.673301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.673671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.673684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 
00:24:10.237 [2024-04-27 00:58:02.674059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.674463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.674476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.674781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.675205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.675219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.675404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.675761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.675773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.676081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.676392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.676405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.676792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.677164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.677178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.677573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.677889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.677902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.678193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.678502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.678515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 
00:24:10.237 [2024-04-27 00:58:02.678880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.679257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.679271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.679579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.680003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.680015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.680380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.680680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.680693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.681063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.681205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.681218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.681519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.681826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.681839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.682169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.682312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.682325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.682630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.683004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.683017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 
00:24:10.237 [2024-04-27 00:58:02.683388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.683755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.683768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.237 qpair failed and we were unable to recover it. 00:24:10.237 [2024-04-27 00:58:02.684068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.237 [2024-04-27 00:58:02.684379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.684392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.684696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.685012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.685025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.685376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.685765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.685778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.686156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.686545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.686558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.686926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.687292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.687306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.687618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.687904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.687918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 
00:24:10.238 [2024-04-27 00:58:02.688305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.688674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.688687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.688990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.689355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.689368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.689623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.689978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.689991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.690393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.690698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.690711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.691013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.691339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.691353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.691655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.692037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.692050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.692366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.692722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.692735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 
00:24:10.238 [2024-04-27 00:58:02.693121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.693481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.693494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.693814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.694137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.694150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.694442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.694808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.694821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.695232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.695557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.695557] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.238 [2024-04-27 00:58:02.695571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.695891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.696198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.696212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.696591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.696951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.696965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.697333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.697646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.697661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 
00:24:10.238 [2024-04-27 00:58:02.697954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.698274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.698288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.698600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.698916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.698930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.699303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.699737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.699750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.699962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.700110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.700123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.700440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.700751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.700765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.701080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.701369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.701383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.701692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.702088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.702103] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 
00:24:10.238 [2024-04-27 00:58:02.702482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.702785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.702799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.238 [2024-04-27 00:58:02.703161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.703581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.238 [2024-04-27 00:58:02.703595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.238 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.703967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.704166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.704181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.704484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.704793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.704807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.705190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.705506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.705520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.705890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.706341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.706354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.706662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.707036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.707049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 
00:24:10.239 [2024-04-27 00:58:02.707207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.707565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.707578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.707961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.708265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.708280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.708574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.708869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.708882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.709249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.709553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.709567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.709950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.710299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.710313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.710684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.710997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.711011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.711380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.711693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.711706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 
00:24:10.239 [2024-04-27 00:58:02.712003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.712321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.712335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.712626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.712940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.712953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.713315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.713738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.713752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.714173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.714544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.714558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.714928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.715261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.715275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.715562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.715863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.715875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.716236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.716684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.716698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 
00:24:10.239 [2024-04-27 00:58:02.717012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.717460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.717474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.717623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.717984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.717996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.718365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.718674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.718687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.719089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.719461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.719474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.719844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.720172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.720186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.720611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.721020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.721035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.721411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.721772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.721785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 
00:24:10.239 [2024-04-27 00:58:02.722157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.722542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.722565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.722717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.723077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.239 [2024-04-27 00:58:02.723091] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.239 qpair failed and we were unable to recover it. 00:24:10.239 [2024-04-27 00:58:02.723465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.723822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.723835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.724215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.724514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.724527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.724889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.725255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.725269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.725532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.725904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.725917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.726215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.726491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.726504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 
00:24:10.240 [2024-04-27 00:58:02.726928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.727147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.727161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.727373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.727758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.727772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.728167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.728485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.728498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.728874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.729327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.729341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.729735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.730104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.730118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.730418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.730788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.730801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.731179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.731600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.731613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 
00:24:10.240 [2024-04-27 00:58:02.731986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.732408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.732427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.732808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.733193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.733210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.733576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.733952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.733968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.734347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.734752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.734767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.735081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.735460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.735473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.735782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.736163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.736178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.736498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.736809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.736823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 
00:24:10.240 [2024-04-27 00:58:02.737197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.737511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.737525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.737923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.738233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.738247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.738560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.738940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.738953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.240 qpair failed and we were unable to recover it. 00:24:10.240 [2024-04-27 00:58:02.739318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.739683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.240 [2024-04-27 00:58:02.739697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.740005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.740377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.740392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.740723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.741085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.741099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.741494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.741858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.741872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 
00:24:10.241 [2024-04-27 00:58:02.742186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.742497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.742510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.742813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.743122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.743135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.743514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.743875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.743888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.744265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.744628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.744641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.744956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.745276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.745293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.745536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.745899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.745912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.746273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.746578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.746591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 
00:24:10.241 [2024-04-27 00:58:02.746887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.747250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.747264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.747631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.747948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.747961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.748349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.748770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.748783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.749100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.749420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.749433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.749854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.750166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.750180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.750501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.750947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.750960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.751340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.751639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.751652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 
00:24:10.241 [2024-04-27 00:58:02.752014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.752380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.752397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.752708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.753027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.753040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.753460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.753756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.753769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.754128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.754514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.754527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.754885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.755257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.755271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.755645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.756028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.756041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.756361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.756671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.756683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 
00:24:10.241 [2024-04-27 00:58:02.757052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.757360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.757373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.757684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.758046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.758059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.758443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.758811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.758824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.759134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.759442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.759458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.241 qpair failed and we were unable to recover it. 00:24:10.241 [2024-04-27 00:58:02.759827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.241 [2024-04-27 00:58:02.760198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.760211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.760574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.760935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.760949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.761328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.761690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.761703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 
00:24:10.242 [2024-04-27 00:58:02.762132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.762510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.762523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.762897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.763088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.763102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.763473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.763789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.763802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.764113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.764502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.764516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.764881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.765236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.765249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.765600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.766021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.766034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.766338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.766640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.766656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 
00:24:10.242 [2024-04-27 00:58:02.766958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.767266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.767279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.767648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.768009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.768022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.768452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.768816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.768829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.769194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.769501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.769514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.769882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.770247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.770261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.770632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.771027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.771040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.771349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.771642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.771657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.772046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.772120] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:10.242 [2024-04-27 00:58:02.772149] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:10.242 [2024-04-27 00:58:02.772156] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:10.242 [2024-04-27 00:58:02.772163] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:10.242 [2024-04-27 00:58:02.772168] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:10.242 [2024-04-27 00:58:02.772275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:24:10.242 [2024-04-27 00:58:02.772515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.242 [2024-04-27 00:58:02.772532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:10.242 qpair failed and we were unable to recover it.
00:24:10.242 [2024-04-27 00:58:02.772505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:24:10.242 [2024-04-27 00:58:02.772588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:24:10.242 [2024-04-27 00:58:02.772589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:24:10.242 [2024-04-27 00:58:02.772915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.242 [2024-04-27 00:58:02.773344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.242 [2024-04-27 00:58:02.773358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:10.242 qpair failed and we were unable to recover it.
00:24:10.242 [2024-04-27 00:58:02.773766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.242 [2024-04-27 00:58:02.774084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.242 [2024-04-27 00:58:02.774097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:10.242 qpair failed and we were unable to recover it.
00:24:10.242 [2024-04-27 00:58:02.774520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.242 [2024-04-27 00:58:02.774824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.242 [2024-04-27 00:58:02.774837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:10.242 qpair failed and we were unable to recover it.
00:24:10.242 [2024-04-27 00:58:02.775320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.242 [2024-04-27 00:58:02.775701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.242 [2024-04-27 00:58:02.775714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:10.242 qpair failed and we were unable to recover it.
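The NOTICE entries above mark an SPDK application in this test coming up (its trace file is /dev/shm/nvmf_trace.0): app_setup_trace points at 'spdk_trace -s nvmf -i 0' for capturing events, and reactors start on cores 4-7, consistent with the earlier "Total cores available: 4" notice. Meanwhile the initiator-side connects to 10.0.0.2:4420 are still being refused, which is why the same qpair error keeps repeating around these notices. A generic way to ride out such a startup window is a bounded retry loop that treats ECONNREFUSED as transient; the sketch below is a plain POSIX illustration of that pattern only, with a hypothetical connect_with_retry() helper and an arbitrary retry count and back-off, not SPDK's actual nvme_tcp reconnect path.

    /* Bounded-retry sketch: keep trying while the listener is not up yet
     * (ECONNREFUSED); give up after max_attempts. Illustration only. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int connect_with_retry(const char *ip, uint16_t port, int max_attempts)
    {
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        for (int attempt = 1; attempt <= max_attempts; attempt++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                return fd;                  /* listener is up; hand back the socket */
            int err = errno;
            close(fd);
            if (err != ECONNREFUSED)        /* only "nothing listening yet" is retried */
                return -1;
            fprintf(stderr, "attempt %d: %s, retrying\n", attempt, strerror(err));
            usleep(200 * 1000);             /* brief back-off before the next attempt */
        }
        return -1;                          /* gave up: the listener never appeared */
    }

Retrying only on ECONNREFUSED keeps real configuration problems (wrong address, unreachable network) from being hidden by the loop, while still giving a slow-starting listener time to bind its port.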
00:24:10.242 [2024-04-27 00:58:02.776080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.776380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.776393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.776765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.777136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.777150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.777575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.777951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.777964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.778326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.778616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.778630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.242 qpair failed and we were unable to recover it. 00:24:10.242 [2024-04-27 00:58:02.778780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.779155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.242 [2024-04-27 00:58:02.779170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.779536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.779850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.779864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.780225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.780582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.780595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 
00:24:10.243 [2024-04-27 00:58:02.780826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.781251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.781267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.781599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.782054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.782075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.782394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.782828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.782844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.783378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.783745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.783761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.784067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.784375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.784389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.784703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.785083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.785098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.785489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.785876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.785891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 
00:24:10.243 [2024-04-27 00:58:02.786288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.786590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.786604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.786907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.787240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.787254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.787627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.788146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.788161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.788592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.788984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.788998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.789374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.789801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.789814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.790203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.790508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.790522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.790830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.791198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.791212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 
00:24:10.243 [2024-04-27 00:58:02.791514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.791883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.791897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.792318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.792716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.792730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.793117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.793508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.793523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.793832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.794127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.794143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.794447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.794759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.794778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.795104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.795429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.795443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.795809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.795952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.795966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 
00:24:10.243 [2024-04-27 00:58:02.796325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.796635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.796649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.797039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.797360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.797375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.797738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.798112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.798127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.798502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.798814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.798828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.799129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.799498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.799512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.243 qpair failed and we were unable to recover it. 00:24:10.243 [2024-04-27 00:58:02.799824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.243 [2024-04-27 00:58:02.800152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.800166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.800527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.800884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.800898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 
00:24:10.244 [2024-04-27 00:58:02.801210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.801582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.801600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.801906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.802218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.802233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.802549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.802991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.803005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.803365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.803673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.803686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.804063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.804457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.804471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.804918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.805290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.805305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.805693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.806007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.806020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 
00:24:10.244 [2024-04-27 00:58:02.806329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.806754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.806768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.806960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.807329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.807343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.807786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.808082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.808097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.808525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.808947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.808967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.809332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.809711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.809724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.810097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.810472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.810485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.810854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.811229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.811243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 
00:24:10.244 [2024-04-27 00:58:02.811601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.811923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.811936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.812260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.812613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.812627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.813081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.813383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.813396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.813711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.814011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.814025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.814394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.814701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.814714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.815096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.815409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.815422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.815786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.816155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.816174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 
00:24:10.244 [2024-04-27 00:58:02.816603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.816971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.816986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.817351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.817710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.817725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.818036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.818399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.818413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.818724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.819038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.819052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.819201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.819565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.819579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.819727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.819913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.244 [2024-04-27 00:58:02.819926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.244 qpair failed and we were unable to recover it. 00:24:10.244 [2024-04-27 00:58:02.820165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.820475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.820489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 
00:24:10.245 [2024-04-27 00:58:02.820854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.821174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.821188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.821559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.821930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.821944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.822400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.822707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.822721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.823149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.823448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.823461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.823786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.824211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.824226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.824535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.824854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.824867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.825259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.825557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.825571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 
00:24:10.245 [2024-04-27 00:58:02.825937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.826309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.826323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.826511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.826799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.826813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.827112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.827469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.827483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.827857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.828229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.828243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.828618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.828979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.828993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.829362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.829936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.829949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.830257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.830640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.830654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 
00:24:10.245 [2024-04-27 00:58:02.830958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.831266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.831280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.831665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.831808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.831820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.832143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.832545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.832558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.832979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.833298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.833311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.833621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.833925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.833938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.834311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.834615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.834629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.834993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.835287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.835300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 
00:24:10.245 [2024-04-27 00:58:02.835604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.835907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.835920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.836291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.836577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.836590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.245 qpair failed and we were unable to recover it. 00:24:10.245 [2024-04-27 00:58:02.836906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.245 [2024-04-27 00:58:02.837275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.837289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.837615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.837925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.837938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.838244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.838577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.838590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.838889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.839339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.839352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.839662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.839964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.839976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 
00:24:10.246 [2024-04-27 00:58:02.840280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.840847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.840859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.841480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.841796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.841809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.842176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.842536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.842549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.842903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.843284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.843297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.843666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.843963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.843976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.844286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.844655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.844668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.844974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.845332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.845346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 
00:24:10.246 [2024-04-27 00:58:02.845662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.846035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.846048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.846470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.846782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.846795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.847026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.847183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.847196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.847495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.847857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.847870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.848181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.848475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.848488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.848784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.849100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.849114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.849413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.849710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.849723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 
00:24:10.246 [2024-04-27 00:58:02.850081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.850392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.850405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.850704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.851001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.851014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.851333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.851705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.851718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.852021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.852320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.852334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.852629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.852929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.852942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.853331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.853691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.853704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.854077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.854609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.854622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 
00:24:10.246 [2024-04-27 00:58:02.854922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.855231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.855245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.855560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.855851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.855864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.856179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.856543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.246 [2024-04-27 00:58:02.856557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.246 qpair failed and we were unable to recover it. 00:24:10.246 [2024-04-27 00:58:02.856852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.857174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.857187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.857489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.857850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.857864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.858178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.858501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.858514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.858874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.859241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.859256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 
00:24:10.247 [2024-04-27 00:58:02.859598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.859919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.859932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.860241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.860541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.860555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.860872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.861192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.861205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.861506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.861799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.861811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.862119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.862487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.862501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.862878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.863172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.863186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.863382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.863811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.863824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 
00:24:10.247 [2024-04-27 00:58:02.864277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.864576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.864589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.864906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.865220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.865234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.865563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.865919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.865944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.866261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.866652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.866675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.867078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.867407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.867428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.867644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.867956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.867969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.868281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.868672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.868694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 
00:24:10.247 [2024-04-27 00:58:02.869090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.869407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.869422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.869758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.869917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.869934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.870233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.870592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.870606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.870908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.871263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.871277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.871582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.871956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.871969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.872392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.872717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.872730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.873113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.873406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.873420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 
00:24:10.247 [2024-04-27 00:58:02.873746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.874048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.874060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.874381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.874743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.874756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.875051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.875357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.875370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.247 [2024-04-27 00:58:02.875929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.876291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.247 [2024-04-27 00:58:02.876304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.247 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.876620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.876982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.876995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.877364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.877681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.877694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.878000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.878364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.878378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 
00:24:10.248 [2024-04-27 00:58:02.878727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.879090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.879104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.879407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.879703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.879716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.880038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.880342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.880356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.880753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.881049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.881062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.881388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.881755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.881768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.882076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.882382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.882395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.882774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.883141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.883154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 
00:24:10.248 [2024-04-27 00:58:02.883475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.883817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.883830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.884134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.884368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.884381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.884536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.884845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.884859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.885167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.885477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.885490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.886043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.886353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.886367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.886686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.887051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.887064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.887375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.887669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.887682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 
00:24:10.248 [2024-04-27 00:58:02.888036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.888409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.888423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.888729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.889041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.889053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.889353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.889721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.889734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.890030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.890370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.890383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.890684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.891107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.891121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.891488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.891858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.891872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.892169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.892538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.892551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 
00:24:10.248 [2024-04-27 00:58:02.892856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.893149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.893163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.893526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.893815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.893828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.894141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.894586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.894599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.248 [2024-04-27 00:58:02.894909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.895225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.248 [2024-04-27 00:58:02.895238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.248 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.895554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.895914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.895927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.896230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.896543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.896555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.896933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.897220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.897233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 
00:24:10.249 [2024-04-27 00:58:02.897664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.897963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.897976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.898340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.898691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.898706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.898846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.899145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.899158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.899579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.899937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.899949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.900250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.900630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.900643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.900903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.901280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.901293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.901658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.901961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.901973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 
00:24:10.249 [2024-04-27 00:58:02.902401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.902820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.902832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.903124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.903503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.903516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.903968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.904388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.904402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.904772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.905159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.905172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.905542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.905912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.905927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.906295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.906743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.906756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.907163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.907606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.907619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 
00:24:10.249 [2024-04-27 00:58:02.907992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.908378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.908391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.908695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.909050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.909063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.909436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.909820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.909833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.910190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.910607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.910620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.910981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.911346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.911359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.249 qpair failed and we were unable to recover it. 00:24:10.249 [2024-04-27 00:58:02.911794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.249 [2024-04-27 00:58:02.912152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.912166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 00:24:10.250 [2024-04-27 00:58:02.912457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.912819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.912831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 
00:24:10.250 [2024-04-27 00:58:02.913195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.913640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.913658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 00:24:10.250 [2024-04-27 00:58:02.913961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.914403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.914417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 00:24:10.250 [2024-04-27 00:58:02.914774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.915144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.915157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 00:24:10.250 [2024-04-27 00:58:02.915454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.915817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.915830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 00:24:10.250 [2024-04-27 00:58:02.916203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.916575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.916588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 00:24:10.250 [2024-04-27 00:58:02.917008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.917378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.917391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 00:24:10.250 [2024-04-27 00:58:02.917693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.918077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.918090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 
00:24:10.250 [2024-04-27 00:58:02.918514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.918875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.918889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 00:24:10.250 [2024-04-27 00:58:02.919332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.919699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.919712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 00:24:10.250 [2024-04-27 00:58:02.920088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.920290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.920303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 00:24:10.250 [2024-04-27 00:58:02.920668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.921053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.921073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 00:24:10.250 [2024-04-27 00:58:02.921368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.921751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.250 [2024-04-27 00:58:02.921764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.250 qpair failed and we were unable to recover it. 00:24:10.250 [2024-04-27 00:58:02.922135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.922516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.922529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 00:24:10.512 [2024-04-27 00:58:02.922952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.923315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.923329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 
00:24:10.512 [2024-04-27 00:58:02.923705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.924100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.924113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 00:24:10.512 [2024-04-27 00:58:02.924557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.924856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.924869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 00:24:10.512 [2024-04-27 00:58:02.925311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.925689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.925701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 00:24:10.512 [2024-04-27 00:58:02.925906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.926310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.926324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 00:24:10.512 [2024-04-27 00:58:02.926685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.927052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.927065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 00:24:10.512 [2024-04-27 00:58:02.927458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.927880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.927893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 00:24:10.512 [2024-04-27 00:58:02.928100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.928548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.928561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 
00:24:10.512 [2024-04-27 00:58:02.929035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.929479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.929492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 00:24:10.512 [2024-04-27 00:58:02.929866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.930287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.930316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 00:24:10.512 [2024-04-27 00:58:02.930702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.930995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.931008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 00:24:10.512 [2024-04-27 00:58:02.931484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.931794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.931807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 00:24:10.512 [2024-04-27 00:58:02.932005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.932372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.932386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 00:24:10.512 [2024-04-27 00:58:02.932773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.933131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.933144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.512 qpair failed and we were unable to recover it. 00:24:10.512 [2024-04-27 00:58:02.933591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.512 [2024-04-27 00:58:02.933965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.933978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 
00:24:10.513 [2024-04-27 00:58:02.934339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.934776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.934788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.935162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.935532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.935544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.935953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.936371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.936384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.936834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.937205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.937218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.937595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.938026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.938039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.938413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.938834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.938847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.939295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.939448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.939461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 
00:24:10.513 [2024-04-27 00:58:02.939833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.940209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.940223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.940650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.941092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.941106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.941483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.941943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.941956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.942378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.942754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.942767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.943209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.943631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.943644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.944065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.944396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.944409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.944785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.945224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.945238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 
00:24:10.513 [2024-04-27 00:58:02.945628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.946000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.946013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.946442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.946821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.946834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.947135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.947530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.947543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.948009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.948460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.948473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.948840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.949129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.949143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.949568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.949924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.949937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.950356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.950721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.950734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 
00:24:10.513 [2024-04-27 00:58:02.951126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.951570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.951583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.951963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.952407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.952420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.952799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.953223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.953236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.953615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.953815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.953828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.954202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.954598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.954611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.954935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.955348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.955362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.955685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.956127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.956140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 
00:24:10.513 [2024-04-27 00:58:02.956580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.957022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.957034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.957408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.957778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.957791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.958255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.958717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.958730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.959171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.959593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.959605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.513 [2024-04-27 00:58:02.960046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.960354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.513 [2024-04-27 00:58:02.960368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.513 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.960695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.961046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.961059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.961511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.961888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.961901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 
00:24:10.514 [2024-04-27 00:58:02.962345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.962801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.962814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.963138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.963497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.963510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.963814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.964234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.964247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.964742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.965126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.965139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.965497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.965940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.965952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.966340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.966761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.966774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.967194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.967494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.967507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 
00:24:10.514 [2024-04-27 00:58:02.967926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.968292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.968305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.968764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.969065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.969083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.969477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.969853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.969866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.970242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.970599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.970611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.970903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.971319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.971332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.971814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.972261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.972275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.972728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.973145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.973158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 
00:24:10.514 [2024-04-27 00:58:02.973476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.973916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.973929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.974375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.974751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.974764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.975154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.975608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.975620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.976044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.976485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.976498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.976883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.977337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.977351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.977643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.978081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.978095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.978466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.978778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.978791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 
00:24:10.514 [2024-04-27 00:58:02.979146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.979510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.979523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.979967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.980412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.980426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.980861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.981079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.981092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.981461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.981902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.981915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.982274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.982651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.982664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.983091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.983530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.983543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.983969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.984410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.984424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 
00:24:10.514 [2024-04-27 00:58:02.984799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.985218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.985232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.985599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.985969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.985982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.986294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.986713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.986726] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.514 qpair failed and we were unable to recover it. 00:24:10.514 [2024-04-27 00:58:02.987171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.514 [2024-04-27 00:58:02.987491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.987504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.987951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.988334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.988347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.988732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.989102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.989115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.989571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.989940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.989952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 
00:24:10.515 [2024-04-27 00:58:02.990330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.990698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.990711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.991153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.991573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.991586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.992030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.992400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.992413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.992858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.993248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.993261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.993650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.993978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.993991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.994295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.994766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.994779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.995087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.995448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.995461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 
00:24:10.515 [2024-04-27 00:58:02.995836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.996305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.996318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.996738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.997176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.997189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.997548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.997931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.997944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.998390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.998787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.998800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.999173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.999554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:02.999567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:02.999956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.000326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.000339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.000746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.001150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.001164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 
00:24:10.515 [2024-04-27 00:58:03.001585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.002027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.002040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.002415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.002784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.002797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.003262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.003702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.003715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.003984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.004415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.004429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.004856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.005143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.005156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.005523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.005884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.005897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.006342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.006714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.006727] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 
00:24:10.515 [2024-04-27 00:58:03.007104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.007486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.007499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.007868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.008305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.008319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.008674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.009123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.009137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.009453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.009806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.009819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.010202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.010573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.010586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.010959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.011311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.011324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.011721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.012166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.012180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 
00:24:10.515 [2024-04-27 00:58:03.012482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.012869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.012882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.515 qpair failed and we were unable to recover it. 00:24:10.515 [2024-04-27 00:58:03.013319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.515 [2024-04-27 00:58:03.013692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.013705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.014084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.014459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.014472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.014850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.015215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.015229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.015538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.015983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.015996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.016365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.016683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.016696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.017140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.017515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.017528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 
00:24:10.516 [2024-04-27 00:58:03.017829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.018299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.018313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.018629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.019055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.019068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.019383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.019764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.019777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.020136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.020514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.020527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.020920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.021294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.021308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.021756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.022130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.022144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.022564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.022980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.022993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 
00:24:10.516 [2024-04-27 00:58:03.023371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.023738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.023751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.024197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.024616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.024632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.025054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.025415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.025428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.025862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.026303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.026316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.026711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.027154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.027168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.027607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.027983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.027996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.028402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.028780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.028793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 
00:24:10.516 [2024-04-27 00:58:03.029188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.029610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.029623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.029992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.030435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.030449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.030821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.031188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.031203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.031692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.032057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.032075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.032470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.032857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.032875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.033318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.033762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.033775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.034162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.034550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.034565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 
00:24:10.516 [2024-04-27 00:58:03.035007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.035308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.035323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.516 qpair failed and we were unable to recover it. 00:24:10.516 [2024-04-27 00:58:03.035694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.516 [2024-04-27 00:58:03.036113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.036128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.036497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.036943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.036956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.037316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.037762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.037775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.038154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.038517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.038530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.038665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.039023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.039036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.039392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.039760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.039773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 
00:24:10.517 [2024-04-27 00:58:03.040245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.040613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.040630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.040987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.041365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.041378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.041825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.042126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.042140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.042468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.042783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.042796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.043220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.043604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.043617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.043997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.044292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.044306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.044773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.045063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.045082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 
00:24:10.517 [2024-04-27 00:58:03.045457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.045755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.045768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.045975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.046415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.046429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.046818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.047190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.047204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.047563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.047927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.047943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.048257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.048633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.048646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.048961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.049263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.049277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.049657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.049968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.049982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 
00:24:10.517 [2024-04-27 00:58:03.050424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.050800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.050813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.051170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.051470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.051483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.051798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.052171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.052184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.052546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.052917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.052929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.053312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.053697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.053710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.054133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.054451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.054465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.054773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.055140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.055154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 
00:24:10.517 [2024-04-27 00:58:03.055577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.056000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.056013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.056434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.056888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.056902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.057202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.057611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.057624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.058080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.058381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.058394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.058764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.059182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.059196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.059574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.059969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.059982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 00:24:10.517 [2024-04-27 00:58:03.060381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.060693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.517 [2024-04-27 00:58:03.060706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.517 qpair failed and we were unable to recover it. 
00:24:10.518 [2024-04-27 00:58:03.061098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.061413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.061426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.061849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.062293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.062307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.062619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.063057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.063077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.063449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.063758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.063771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.064128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.064486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.064499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.064815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.065166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.065180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.065593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.066025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.066038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 
00:24:10.518 [2024-04-27 00:58:03.066483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.066854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.066867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.067185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.067636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.067649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.067789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.068163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.068177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.068542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.068835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.068848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.069271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.069710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.069723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.070093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.070546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.070559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.070986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.071350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.071363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 
00:24:10.518 [2024-04-27 00:58:03.071720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.072007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.072019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.072174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.072488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.072501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.072893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.073337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.073351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.073751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.074172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.074185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.074607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.074919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.074932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.075375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.075668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.075681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.076103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.076478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.076492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 
00:24:10.518 [2024-04-27 00:58:03.076801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.077153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.077166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.077566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.077878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.077891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.078146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.078519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.078532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.078901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.079317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.079331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.079705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.080011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.080024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.080447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.080819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.080831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.081281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.081902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.081915] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 
00:24:10.518 [2024-04-27 00:58:03.082283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.082647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.082660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.083016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.083371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.083385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.083687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.083982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.083995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.084366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.084763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.084777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.085224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.085674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.085687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.518 qpair failed and we were unable to recover it. 00:24:10.518 [2024-04-27 00:58:03.086256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.518 [2024-04-27 00:58:03.086600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.086613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.087064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.087462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.087475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 
00:24:10.519 [2024-04-27 00:58:03.087939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.088361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.088375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.088760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.089183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.089196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.089560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.089984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.089997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.090444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.090804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.090817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.091221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.091655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.091668] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.092039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.092403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.092417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.092799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.093259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.093272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 
00:24:10.519 [2024-04-27 00:58:03.093718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.094143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.094157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.094525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.094907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.094920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.095285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.095727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.095740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.096050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.096484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.096497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.096886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.097275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.097288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.097863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.098416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.098430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.098800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.099164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.099178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 
00:24:10.519 [2024-04-27 00:58:03.099637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.100000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.100013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.100387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.100693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.100706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.101066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.101442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.101456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.101848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.102221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.102235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.102639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.103059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.103076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.103474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.103905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.103918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.104287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.104705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.104718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 
00:24:10.519 [2024-04-27 00:58:03.105153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.105458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.105471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.105776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.106093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.106107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.106480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.106781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.106794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.107102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.107544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.107557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.108001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.108362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.108375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.108769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.109027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.109040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.109356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.109912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.109925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 
00:24:10.519 [2024-04-27 00:58:03.110235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.110680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.110693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.111057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.111444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.111458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.111832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.112181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.112195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.519 qpair failed and we were unable to recover it. 00:24:10.519 [2024-04-27 00:58:03.112466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.112821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.519 [2024-04-27 00:58:03.112834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.113199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.113561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.113574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.113939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.114389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.114403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.114846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.115172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.115186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 
00:24:10.520 [2024-04-27 00:58:03.115606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.116020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.116033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.116398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.116706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.116719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.117317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.117613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.117626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.118010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.118476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.118490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.118849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.119215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.119228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.119662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.119964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.119977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.120424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.120789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.120802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 
00:24:10.520 [2024-04-27 00:58:03.121101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.121465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.121478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.121779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.122088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.122102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.122524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.122993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.123005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.123328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.123805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.123818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.124287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.124706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.124719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.125141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.125503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.125516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.125817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.126201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.126215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 
00:24:10.520 [2024-04-27 00:58:03.126657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.127049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.127062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.127443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.127813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.127826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.128234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.128681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.128694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.129068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.129374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.129387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.129810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.130274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.130287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.130674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.130980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.130993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.131441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.131807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.131820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 
00:24:10.520 [2024-04-27 00:58:03.132194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.132562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.132574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.133046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.133431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.133445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.133840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.134161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.134175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.134619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.135063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.135081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.520 qpair failed and we were unable to recover it. 00:24:10.520 [2024-04-27 00:58:03.135463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.135894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.520 [2024-04-27 00:58:03.135907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.136215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.136573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.136586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.136954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.137325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.137338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 
00:24:10.521 [2024-04-27 00:58:03.137787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.138155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.138169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.138612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.139055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.139068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.139449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.139816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.139829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.140253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.140628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.140641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.141008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.141457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.141470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.141842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.142221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.142235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.142689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.143056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.143068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 
00:24:10.521 [2024-04-27 00:58:03.143476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.143871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.143884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.144258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.144704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.144717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.145142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.145504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.145517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.145662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.146026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.146039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.146416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.146860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.146873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.147315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.147679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.147692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.148083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.148528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.148541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 
00:24:10.521 [2024-04-27 00:58:03.148753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.149193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.149207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.149506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.149806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.149824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.150194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.150614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.150626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.151003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.151458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.151471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.151669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.151964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.151976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.152291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.152685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.152698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 00:24:10.521 [2024-04-27 00:58:03.153167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.153553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.521 [2024-04-27 00:58:03.153565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.521 qpair failed and we were unable to recover it. 
00:24:10.521 [2024-04-27 00:58:03.154010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.521 [2024-04-27 00:58:03.154152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.521 [2024-04-27 00:58:03.154166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:10.521 qpair failed and we were unable to recover it.
00:24:10.521 [2024-04-27 00:58:03.154611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.521 [2024-04-27 00:58:03.154984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.521 [2024-04-27 00:58:03.154998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:10.521 qpair failed and we were unable to recover it.
00:24:10.522 [2024-04-27 00:58:03.181864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.522 [2024-04-27 00:58:03.182315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.522 [2024-04-27 00:58:03.182328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:10.522 qpair failed and we were unable to recover it.
00:24:10.790 [2024-04-27 00:58:03.203513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.790 [2024-04-27 00:58:03.203867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.790 [2024-04-27 00:58:03.203880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:10.790 qpair failed and we were unable to recover it.
00:24:10.791 [2024-04-27 00:58:03.226918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.791 [2024-04-27 00:58:03.227364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.791 [2024-04-27 00:58:03.227377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:10.791 qpair failed and we were unable to recover it.
00:24:10.792 [2024-04-27 00:58:03.250591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.792 [2024-04-27 00:58:03.251061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.792 [2024-04-27 00:58:03.251082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:10.792 qpair failed and we were unable to recover it.
00:24:10.794 [2024-04-27 00:58:03.282154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.794 [2024-04-27 00:58:03.282538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:10.794 [2024-04-27 00:58:03.282551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420
00:24:10.794 qpair failed and we were unable to recover it.
00:24:10.794 [2024-04-27 00:58:03.282970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.283411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.283425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.283730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.284138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.284155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.284587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.284986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.284999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.285429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.285794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.285808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.286253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.286642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.286655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.287023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.287380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.287394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.287791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.288093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.288107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 
00:24:10.794 [2024-04-27 00:58:03.288434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.288875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.288888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.289157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.289543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.289556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.290010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.290433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.290448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.290852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.291226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.291239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.291562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.291920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.291936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.292309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.292752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.292765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.293208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.293581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.293594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 
00:24:10.794 [2024-04-27 00:58:03.293963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.294277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.294290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.294658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.294964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.294977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.295338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.295787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.295800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.296176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.296623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.296637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.297087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.297441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.297454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.297900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.298274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.298288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.298709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.299152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.299166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 
00:24:10.794 [2024-04-27 00:58:03.299551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.299852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.299868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.300314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.300748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.300762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.301208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.301631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.301645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.302081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.302448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.302461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.302851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.303223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.303237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.794 qpair failed and we were unable to recover it. 00:24:10.794 [2024-04-27 00:58:03.303611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.794 [2024-04-27 00:58:03.303977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.303990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.795 qpair failed and we were unable to recover it. 00:24:10.795 [2024-04-27 00:58:03.304344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.304718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.304731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.795 qpair failed and we were unable to recover it. 
00:24:10.795 [2024-04-27 00:58:03.305130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.305548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.305561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.795 qpair failed and we were unable to recover it. 00:24:10.795 [2024-04-27 00:58:03.305955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.306318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.306332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.795 qpair failed and we were unable to recover it. 00:24:10.795 [2024-04-27 00:58:03.306782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.307147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.307160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.795 qpair failed and we were unable to recover it. 00:24:10.795 [2024-04-27 00:58:03.307533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.307906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.307920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.795 qpair failed and we were unable to recover it. 00:24:10.795 [2024-04-27 00:58:03.308356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.308736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.308749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.795 qpair failed and we were unable to recover it. 00:24:10.795 [2024-04-27 00:58:03.309126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.309575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.309588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.795 qpair failed and we were unable to recover it. 00:24:10.795 [2024-04-27 00:58:03.310052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.310410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.310424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.795 qpair failed and we were unable to recover it. 
00:24:10.795 [2024-04-27 00:58:03.310705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.311125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.311138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.795 qpair failed and we were unable to recover it. 00:24:10.795 [2024-04-27 00:58:03.311584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.795 [2024-04-27 00:58:03.311884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.311898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.312206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.312637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.312650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.313076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.313502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.313516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.313902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.314275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.314289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.314731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.315175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.315188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.315642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.316064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.316081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 
00:24:10.796 [2024-04-27 00:58:03.316472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.316933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.316946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.317313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.317711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.317724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.318169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.318538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.318551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.319004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.319467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.319481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.319903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.320267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.320280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.320595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.320894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.320907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.321289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.321653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.321666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 
00:24:10.796 [2024-04-27 00:58:03.322111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.322555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.322568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.322924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.323198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.323212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.323572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.323989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.324003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.324435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.324898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.324911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.325311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.325633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.325647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.326025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.326226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.326240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.326614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.327034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.327047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 
00:24:10.796 [2024-04-27 00:58:03.327445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.327832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.327845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.328233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.328701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.328714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.329029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.329302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.796 [2024-04-27 00:58:03.329315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.796 qpair failed and we were unable to recover it. 00:24:10.796 [2024-04-27 00:58:03.329760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.330123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.330137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.330603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.331055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.331068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.331471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.331835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.331848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.332470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.332938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.332951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 
00:24:10.797 [2024-04-27 00:58:03.333405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.333838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.333851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.334247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.334677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.334690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.335065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.335491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.335505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.335999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.336360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.336374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.336743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.337096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.337109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.337385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.337752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.337766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.338160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.338463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.338477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 
00:24:10.797 [2024-04-27 00:58:03.338839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.339152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.339165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.339563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.339978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.339991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.340465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.340860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.340873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.341238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.341618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.341632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.342062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.342524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.342537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.342908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.343278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.343292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.343735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.344102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.344115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 
00:24:10.797 [2024-04-27 00:58:03.344498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.344866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.344879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.345309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.345624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.345637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.346118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.346562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.346575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.347019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.347405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.347419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.347876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.348195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.348209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.348576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.349029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.349041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.349193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.349664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.349678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 
00:24:10.797 [2024-04-27 00:58:03.350100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.350480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.350494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.350853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.351279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.351293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.351732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.352087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.352100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.797 [2024-04-27 00:58:03.352472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.352951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.797 [2024-04-27 00:58:03.352964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.797 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.353285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.353484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.353497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.353957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.354319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.354333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.354651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.355069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.355093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 
00:24:10.798 [2024-04-27 00:58:03.355537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.355988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.356001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.356367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.356723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.356735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.357096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.357518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.357532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.357851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.358215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.358228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.358595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.359033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.359047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.359450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.359840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.359853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.360241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.360664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.360677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 
00:24:10.798 [2024-04-27 00:58:03.361116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.361504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.361517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.361906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.362258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.362272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.362636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.363005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.363018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.363442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.364067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.364088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.364534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.364842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.364855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.365242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.365697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.365710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.366179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.366549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.366563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 
00:24:10.798 [2024-04-27 00:58:03.366764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.367147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.367161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.367530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.367949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.367963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.368329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.368768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.368781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.369200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.369573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.369587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.369962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.370279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.370293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.370715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.371164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.371178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.371490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.371944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.371958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 
00:24:10.798 [2024-04-27 00:58:03.372277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.372720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.372734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.373110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.373398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.373412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.373801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.374223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.374237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.374566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.375116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.375130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.798 qpair failed and we were unable to recover it. 00:24:10.798 [2024-04-27 00:58:03.375578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.798 [2024-04-27 00:58:03.376000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.376013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.376477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.376788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.376802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.377171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.377543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.377556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 
00:24:10.799 [2024-04-27 00:58:03.377980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.378406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.378420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.378792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.379176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.379191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.379637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.379994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.380007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.380323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.380687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.380700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.381002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.381370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.381385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.381740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.382120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.382133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.382509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.382899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.382912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 
00:24:10.799 [2024-04-27 00:58:03.383361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.383685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.383699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.384074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.384208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.384222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.384596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.384961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.384975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.385354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.385773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.385787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.386167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.386480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.386494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.386808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.387008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.387022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.387393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.387754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.387768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 
00:24:10.799 [2024-04-27 00:58:03.388135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.388515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.388529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.388905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.389281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.389295] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.389742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.390113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.390127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.390548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.390943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.390956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.391406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.391771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.391784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.392154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.392516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.392530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.392990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.393358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.393373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 
00:24:10.799 [2024-04-27 00:58:03.393747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.394190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.394204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.394583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.394951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.394965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.395424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.395858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.395871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.396314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.396679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.396693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.799 [2024-04-27 00:58:03.397132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.397551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.799 [2024-04-27 00:58:03.397564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.799 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.397933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.398323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.398336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.398708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.399175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.399190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 
00:24:10.800 [2024-04-27 00:58:03.399580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.400006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.400019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.400389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.400765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.400779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.401229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.401542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.401556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.401977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.402358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.402373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.402816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.403206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.403220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.403421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.403783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.403799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.404218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.404594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.404607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 
00:24:10.800 [2024-04-27 00:58:03.405053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.405501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.405514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.405880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.406275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.406289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.406591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.407020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.407035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.407406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.407852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.407865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.408182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.408550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.408563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.408925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.409345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.409360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.409715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.409931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.409945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 
00:24:10.800 [2024-04-27 00:58:03.410388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.410765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.410779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.411175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.411630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.411646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.412006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.412369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.412383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.412742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.413136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.413150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.413603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.413973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.413987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.414142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.414521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.414534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 00:24:10.800 [2024-04-27 00:58:03.414905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.415223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.415237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.800 qpair failed and we were unable to recover it. 
00:24:10.800 [2024-04-27 00:58:03.415659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.800 [2024-04-27 00:58:03.416014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.416027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.416451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.416805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.416819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.417142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.417459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.417472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.417831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.418212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.418226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.418597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.419016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.419033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.419405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.419766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.419779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.420197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.420644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.420657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 
00:24:10.801 [2024-04-27 00:58:03.421108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.421530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.421543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.421935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.422357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.422371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.422815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.423257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.423271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.423646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.424094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.424107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.424549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.424992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.425004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.425448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.425871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.425884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.426283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.426720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.426733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 
00:24:10.801 [2024-04-27 00:58:03.427102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.427473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.427488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.427965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.428454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.428468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.428946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.429316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.429329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.429777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.430144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.430158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.430625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.431078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.431092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.431478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.431861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.431874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.432319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.432711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.432724] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 
00:24:10.801 [2024-04-27 00:58:03.433168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.433629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.433642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.434110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.434480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.434493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.434961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.435405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.435418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.435867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.436305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.436318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.436759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.437197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.437210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.801 [2024-04-27 00:58:03.437594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.438041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.801 [2024-04-27 00:58:03.438054] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.801 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.438435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.438865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.438878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 
00:24:10.802 [2024-04-27 00:58:03.439299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.439728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.439741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.440147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.440579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.440592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.441014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.441432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.441446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.441896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.442338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.442352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.442723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.443170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.443183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.443541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.443913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.443926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.444208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.444661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.444674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 
00:24:10.802 [2024-04-27 00:58:03.445126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.445546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.445559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.445983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.446351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.446365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.446733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.447156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.447170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.447614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.448080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.448093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.448536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.448908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.448921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.449345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.449791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.449803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.450175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.450497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.450510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 
00:24:10.802 [2024-04-27 00:58:03.450940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.451385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.451399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.451850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.452236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.452250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.452705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 00:58:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:10.802 00:58:03 -- common/autotest_common.sh@850 -- # return 0 00:24:10.802 [2024-04-27 00:58:03.453146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.453163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 00:58:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:10.802 [2024-04-27 00:58:03.453520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 00:58:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:10.802 [2024-04-27 00:58:03.453924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:10.802 [2024-04-27 00:58:03.453938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.454378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.454803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.454816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.455214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.455654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.455667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 
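(Reader's note, not part of the captured console output.) The `-- #` lines interleaved above are bash xtrace output from the harness scripts (common/autotest_common.sh, nvmf/common.sh): the wait-for-target check concludes (`(( i == 0 ))`, `return 0`) and the `start_nvmf_tgt` timing region is closed, while the host side keeps retrying the refused connects in the background. The sketch below is a rough illustration of that visible pattern, under the assumption that the host simply retries the socket connect with a short delay until the listener at 10.0.0.2:4420 appears or an attempt budget runs out; the real retry policy lives in nvme_tcp.c and is not shown in this log.

/* Hypothetical retry loop, for illustration only: keep calling connect()
 * until it succeeds or the attempt budget is exhausted, treating
 * ECONNREFUSED as "target not listening yet". This mirrors the pattern
 * visible in the log, not SPDK's actual implementation. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_with_retry(const char *ip, uint16_t port, int max_attempts)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &sa.sin_addr);

    for (int attempt = 0; attempt < max_attempts; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
            return fd;                      /* qpair socket established */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        if (errno != ECONNREFUSED)
            break;                          /* unexpected error: give up */
        usleep(100 * 1000);                 /* wait for the listener to appear */
    }
    return -1;                              /* unable to recover the qpair */
}

int main(void)
{
    int fd = connect_with_retry("10.0.0.2", 4420, 30);
    if (fd >= 0)
        close(fd);
    return fd >= 0 ? 0 : 1;
}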
00:24:10.802 [2024-04-27 00:58:03.456114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.456557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.456570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.457017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.457388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.457402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.457850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.458275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.458289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.458722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.459145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.459159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.459532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.459961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.459975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.460353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.460774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.802 [2024-04-27 00:58:03.460787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.802 qpair failed and we were unable to recover it. 00:24:10.802 [2024-04-27 00:58:03.461245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.461623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.461636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 
00:24:10.803 [2024-04-27 00:58:03.462135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.462580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.462593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.463048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.463519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.463533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.464087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.464477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.464490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.464917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.465338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.465352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.465723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.466142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.466157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.466588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.467213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.467227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.467698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.468075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.468089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 
00:24:10.803 [2024-04-27 00:58:03.468544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.468982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.468994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.469414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.469793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.469806] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.470251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.470662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.470675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.471172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.471608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.471622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.471952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.472266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.472282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.472661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.473082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.473096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.473512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.473839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.473852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 
00:24:10.803 [2024-04-27 00:58:03.474272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.474688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.474702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.475086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.475452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.475465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.475789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.476229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.476244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:10.803 [2024-04-27 00:58:03.476615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.476996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:10.803 [2024-04-27 00:58:03.477010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:10.803 qpair failed and we were unable to recover it. 00:24:11.064 [2024-04-27 00:58:03.477399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.477783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.477796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 00:24:11.064 [2024-04-27 00:58:03.478194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.478619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.478634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 00:24:11.064 [2024-04-27 00:58:03.479029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.479407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.479421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 
00:24:11.064 [2024-04-27 00:58:03.479796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.480163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.480176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 00:24:11.064 [2024-04-27 00:58:03.480600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.480960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.480973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 00:24:11.064 [2024-04-27 00:58:03.481181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.481540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.481554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 00:24:11.064 [2024-04-27 00:58:03.481933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.482378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.482392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 00:24:11.064 [2024-04-27 00:58:03.482960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.483289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.483304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 00:24:11.064 [2024-04-27 00:58:03.483728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.484175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.484190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 00:24:11.064 [2024-04-27 00:58:03.484564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.485014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.485027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 
00:24:11.064 [2024-04-27 00:58:03.485382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.485772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.485785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 00:24:11.064 [2024-04-27 00:58:03.486192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.486506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.486520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 00:24:11.064 [2024-04-27 00:58:03.486969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.487385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.487399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 00:24:11.064 [2024-04-27 00:58:03.487717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.488122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.488136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 00:24:11.064 [2024-04-27 00:58:03.488420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.488843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.488857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 00:24:11.064 00:58:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.064 [2024-04-27 00:58:03.489141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.489468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 [2024-04-27 00:58:03.489482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.064 qpair failed and we were unable to recover it. 
00:24:11.064 00:58:03 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:11.064 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.064 [2024-04-27 00:58:03.489866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.064 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:11.065 [2024-04-27 00:58:03.490240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.490255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.490654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.491088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.491102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.491468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.491794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.491807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.492223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.492609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.492622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.492928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.493294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.493310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.493758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.494130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.494144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.494566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.494934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.494948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 
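The xtrace line above shows the test building its backing store: rpc_cmd bdev_malloc_create 64 512 -b Malloc0 creates a RAM-backed bdev named Malloc0, 64 MB in size with a 512-byte block size, which is later exposed as the subsystem's namespace. rpc_cmd appears to be the autotest wrapper around SPDK's scripts/rpc.py, so a direct equivalent would look roughly like this (sketch only, assuming the default RPC socket):
$ ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB malloc bdev, 512-byte blocks
Malloc0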
00:24:11.065 [2024-04-27 00:58:03.495393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.495761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.495774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.496210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.496587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.496600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.496900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.497207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.497221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.497604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.498024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.498038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.498214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.498583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.498597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.498966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.499399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.499414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.499796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.500171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.500185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 
00:24:11.065 [2024-04-27 00:58:03.500550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.501019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.501033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.501396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.501716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.501730] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.502175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.502600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.502615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.503094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.503564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.503578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.504001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.504442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.504456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.504780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.505140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.505155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.505511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.505887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.505901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 
00:24:11.065 [2024-04-27 00:58:03.506266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.506711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.506725] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.507146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.507501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.507515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.507840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.508278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.508292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 [2024-04-27 00:58:03.508695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.509128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.509142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 Malloc0 00:24:11.065 [2024-04-27 00:58:03.509534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.509845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.509858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 00:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.065 [2024-04-27 00:58:03.510284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 00:58:03 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:11.065 [2024-04-27 00:58:03.510726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.510739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.065 qpair failed and we were unable to recover it. 00:24:11.065 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.065 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:11.065 [2024-04-27 00:58:03.511190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.065 [2024-04-27 00:58:03.511636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.511649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 
00:24:11.066 [2024-04-27 00:58:03.512165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.512591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.512604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.513054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.513150] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.066 [2024-04-27 00:58:03.513461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.513475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.513846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.514284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.514298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.514620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.515043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.515056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.515431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.515794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.515807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.516227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.516628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.516641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.517016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.517394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.517408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 
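The *** TCP Transport Init *** notice above is the target-side confirmation that the rpc_cmd nvmf_create_transport -t tcp -o call took effect. As an illustrative rpc.py equivalent (flags copied verbatim from the xtrace; -t selects the TCP transport):
$ ./scripts/rpc.py nvmf_create_transport -t tcp -o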
00:24:11.066 [2024-04-27 00:58:03.517779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.518243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.518256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.518722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.519166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.519180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.519602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.519916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.519929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.520408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.520852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.520866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.521309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.521727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.521740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 00:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.066 00:58:03 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:11.066 [2024-04-27 00:58:03.522187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.066 [2024-04-27 00:58:03.522621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.522635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 
00:24:11.066 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:11.066 [2024-04-27 00:58:03.523058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.523515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.523528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.523907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.524328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.524342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.524736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.525195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.525209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.525662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.526018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.526032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.526497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.526917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.526931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.527373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.527813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.527826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.528209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.528607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.528620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 
00:24:11.066 [2024-04-27 00:58:03.529052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.529494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.529507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 00:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.066 [2024-04-27 00:58:03.529936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 00:58:03 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:11.066 [2024-04-27 00:58:03.530379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.530394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.066 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:11.066 [2024-04-27 00:58:03.530766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.531214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.531227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.531651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.532097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.532111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.532554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.532992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.533008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.066 qpair failed and we were unable to recover it. 00:24:11.066 [2024-04-27 00:58:03.533401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.066 [2024-04-27 00:58:03.533818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.533831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.067 qpair failed and we were unable to recover it. 00:24:11.067 [2024-04-27 00:58:03.534278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.534722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.534735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.067 qpair failed and we were unable to recover it. 
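Interleaved with the connect spam, the test now assembles the target: nvmf_create_subsystem creates subsystem nqn.2016-06.io.spdk:cnode1 with any-host access allowed (-a) and serial number SPDK00000000000001 (-s), and nvmf_subsystem_add_ns attaches Malloc0 to it as a namespace. A direct rpc.py sketch of the same two steps (arguments copied from the xtrace):
$ ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$ ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first namespace defaults to NSID 1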
00:24:11.067 [2024-04-27 00:58:03.535110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.535482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.535495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.067 qpair failed and we were unable to recover it. 00:24:11.067 [2024-04-27 00:58:03.535942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.536332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.536345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.067 qpair failed and we were unable to recover it. 00:24:11.067 [2024-04-27 00:58:03.536792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.537176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.537190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.067 qpair failed and we were unable to recover it. 00:24:11.067 [2024-04-27 00:58:03.537640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 00:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.067 [2024-04-27 00:58:03.538084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.538098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.067 qpair failed and we were unable to recover it. 00:24:11.067 00:58:03 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.067 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.067 [2024-04-27 00:58:03.538543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:11.067 [2024-04-27 00:58:03.538985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.538998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.067 qpair failed and we were unable to recover it. 00:24:11.067 [2024-04-27 00:58:03.539448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.539891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.539903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.067 qpair failed and we were unable to recover it. 
00:24:11.067 [2024-04-27 00:58:03.540298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.540682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.540694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.067 qpair failed and we were unable to recover it. 00:24:11.067 [2024-04-27 00:58:03.541119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.541361] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.067 [2024-04-27 00:58:03.541565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-04-27 00:58:03.541578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7958000b90 with addr=10.0.0.2, port=4420 00:24:11.067 qpair failed and we were unable to recover it. 00:24:11.067 [2024-04-27 00:58:03.543794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.067 [2024-04-27 00:58:03.543973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.067 [2024-04-27 00:58:03.543996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.067 [2024-04-27 00:58:03.544007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.067 [2024-04-27 00:58:03.544016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.067 [2024-04-27 00:58:03.544043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.067 qpair failed and we were unable to recover it. 00:24:11.067 00:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.067 00:58:03 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:11.067 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.067 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:11.067 [2024-04-27 00:58:03.553749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.067 00:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.067 [2024-04-27 00:58:03.553920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.067 [2024-04-27 00:58:03.553941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.067 [2024-04-27 00:58:03.553951] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.067 [2024-04-27 00:58:03.553960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.067 [2024-04-27 00:58:03.553981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.067 qpair failed and we were unable to recover it. 
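The *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** notice marks the point at which the target actually starts accepting TCP connections on that address; connect attempts issued before this point are what surface as the ECONNREFUSED (errno 111) entries above. The two listener additions traced here, as an rpc.py sketch (arguments copied from the xtrace):
$ ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$ ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # same endpoint for the discovery subsystem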
00:24:11.067 00:58:03 -- host/target_disconnect.sh@58 -- # wait 1815878 00:24:11.067 [2024-04-27 00:58:03.563775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.067 [2024-04-27 00:58:03.563901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.067 [2024-04-27 00:58:03.563918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.067 [2024-04-27 00:58:03.563925] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.067 [2024-04-27 00:58:03.563930] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.067 [2024-04-27 00:58:03.563946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.067 qpair failed and we were unable to recover it. 00:24:11.067 [2024-04-27 00:58:03.573694] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.067 [2024-04-27 00:58:03.573856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.067 [2024-04-27 00:58:03.573875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.067 [2024-04-27 00:58:03.573882] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.067 [2024-04-27 00:58:03.573888] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.067 [2024-04-27 00:58:03.573904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.067 qpair failed and we were unable to recover it. 00:24:11.067 [2024-04-27 00:58:03.583733] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.067 [2024-04-27 00:58:03.583865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.067 [2024-04-27 00:58:03.583882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.067 [2024-04-27 00:58:03.583889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.067 [2024-04-27 00:58:03.583895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.067 [2024-04-27 00:58:03.583911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.067 qpair failed and we were unable to recover it. 
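From this point the failure signature changes: the TCP connection itself succeeds, but the NVMe-oF Fabrics CONNECT for an I/O queue is rejected. The target logs "Unknown controller ID 0x1" and the initiator reports sct 1 (Command Specific Status), sc 130. 130 decimal is 0x82, which in the Fabrics Connect status encoding corresponds to Connect Invalid Parameters — consistent with an I/O queue trying to attach to a controller the target no longer tracks, i.e. the disconnect scenario this test exercises. Illustrative hex check of the status code:
$ printf 'sc %d = 0x%x\n' 130 130
sc 130 = 0x82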
00:24:11.067 [2024-04-27 00:58:03.593815] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.067 [2024-04-27 00:58:03.593950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.067 [2024-04-27 00:58:03.593968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.067 [2024-04-27 00:58:03.593975] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.067 [2024-04-27 00:58:03.593981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.067 [2024-04-27 00:58:03.593997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.067 qpair failed and we were unable to recover it. 00:24:11.067 [2024-04-27 00:58:03.603803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.067 [2024-04-27 00:58:03.603979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.067 [2024-04-27 00:58:03.603996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.067 [2024-04-27 00:58:03.604003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.067 [2024-04-27 00:58:03.604009] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.067 [2024-04-27 00:58:03.604025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.067 qpair failed and we were unable to recover it. 00:24:11.067 [2024-04-27 00:58:03.613802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.067 [2024-04-27 00:58:03.613931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.067 [2024-04-27 00:58:03.613947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.068 [2024-04-27 00:58:03.613953] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.068 [2024-04-27 00:58:03.613959] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.068 [2024-04-27 00:58:03.613981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.068 qpair failed and we were unable to recover it. 
00:24:11.068 [2024-04-27 00:58:03.623826] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.068 [2024-04-27 00:58:03.623961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.068 [2024-04-27 00:58:03.623977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.068 [2024-04-27 00:58:03.623984] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.068 [2024-04-27 00:58:03.623990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.068 [2024-04-27 00:58:03.624005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.068 qpair failed and we were unable to recover it. 00:24:11.068 [2024-04-27 00:58:03.633860] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.068 [2024-04-27 00:58:03.633992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.068 [2024-04-27 00:58:03.634009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.068 [2024-04-27 00:58:03.634016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.068 [2024-04-27 00:58:03.634022] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.068 [2024-04-27 00:58:03.634038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.068 qpair failed and we were unable to recover it. 00:24:11.068 [2024-04-27 00:58:03.643916] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.068 [2024-04-27 00:58:03.644040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.068 [2024-04-27 00:58:03.644057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.068 [2024-04-27 00:58:03.644064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.068 [2024-04-27 00:58:03.644076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.068 [2024-04-27 00:58:03.644093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.068 qpair failed and we were unable to recover it. 
00:24:11.068 [2024-04-27 00:58:03.653942] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.068 [2024-04-27 00:58:03.654078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.068 [2024-04-27 00:58:03.654094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.068 [2024-04-27 00:58:03.654100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.068 [2024-04-27 00:58:03.654106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.068 [2024-04-27 00:58:03.654122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.068 qpair failed and we were unable to recover it. 00:24:11.068 [2024-04-27 00:58:03.663971] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.068 [2024-04-27 00:58:03.664104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.068 [2024-04-27 00:58:03.664124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.068 [2024-04-27 00:58:03.664131] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.068 [2024-04-27 00:58:03.664136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.068 [2024-04-27 00:58:03.664152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.068 qpair failed and we were unable to recover it. 00:24:11.068 [2024-04-27 00:58:03.674017] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.068 [2024-04-27 00:58:03.674146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.068 [2024-04-27 00:58:03.674162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.068 [2024-04-27 00:58:03.674169] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.068 [2024-04-27 00:58:03.674175] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.068 [2024-04-27 00:58:03.674190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.068 qpair failed and we were unable to recover it. 
00:24:11.068 [2024-04-27 00:58:03.684053] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.068 [2024-04-27 00:58:03.684185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.068 [2024-04-27 00:58:03.684202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.068 [2024-04-27 00:58:03.684208] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.068 [2024-04-27 00:58:03.684214] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.068 [2024-04-27 00:58:03.684230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.068 qpair failed and we were unable to recover it. 00:24:11.068 [2024-04-27 00:58:03.694099] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.068 [2024-04-27 00:58:03.694226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.068 [2024-04-27 00:58:03.694242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.068 [2024-04-27 00:58:03.694249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.068 [2024-04-27 00:58:03.694254] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.068 [2024-04-27 00:58:03.694270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.068 qpair failed and we were unable to recover it. 00:24:11.068 [2024-04-27 00:58:03.704078] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.068 [2024-04-27 00:58:03.704209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.068 [2024-04-27 00:58:03.704225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.068 [2024-04-27 00:58:03.704232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.068 [2024-04-27 00:58:03.704240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.068 [2024-04-27 00:58:03.704256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.068 qpair failed and we were unable to recover it. 
00:24:11.068 [2024-04-27 00:58:03.714129] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.068 [2024-04-27 00:58:03.714253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.068 [2024-04-27 00:58:03.714269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.068 [2024-04-27 00:58:03.714276] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.068 [2024-04-27 00:58:03.714282] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.068 [2024-04-27 00:58:03.714298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.068 qpair failed and we were unable to recover it. 00:24:11.068 [2024-04-27 00:58:03.724153] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.068 [2024-04-27 00:58:03.724284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.068 [2024-04-27 00:58:03.724300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.068 [2024-04-27 00:58:03.724307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.068 [2024-04-27 00:58:03.724313] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.068 [2024-04-27 00:58:03.724328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.068 qpair failed and we were unable to recover it. 00:24:11.068 [2024-04-27 00:58:03.734166] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.068 [2024-04-27 00:58:03.734294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.068 [2024-04-27 00:58:03.734309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.068 [2024-04-27 00:58:03.734316] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.068 [2024-04-27 00:58:03.734321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.069 [2024-04-27 00:58:03.734337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.069 qpair failed and we were unable to recover it. 
00:24:11.069 [2024-04-27 00:58:03.744212] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.069 [2024-04-27 00:58:03.744337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.069 [2024-04-27 00:58:03.744353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.069 [2024-04-27 00:58:03.744360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.069 [2024-04-27 00:58:03.744366] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.069 [2024-04-27 00:58:03.744381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.069 qpair failed and we were unable to recover it. 00:24:11.069 [2024-04-27 00:58:03.754226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.069 [2024-04-27 00:58:03.754354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.069 [2024-04-27 00:58:03.754369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.069 [2024-04-27 00:58:03.754376] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.069 [2024-04-27 00:58:03.754382] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.069 [2024-04-27 00:58:03.754397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.069 qpair failed and we were unable to recover it. 00:24:11.328 [2024-04-27 00:58:03.764201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.328 [2024-04-27 00:58:03.764324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.328 [2024-04-27 00:58:03.764340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.764347] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.764352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.764368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 
00:24:11.329 [2024-04-27 00:58:03.774258] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.329 [2024-04-27 00:58:03.774380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.329 [2024-04-27 00:58:03.774396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.774402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.774408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.774424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 00:24:11.329 [2024-04-27 00:58:03.784399] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.329 [2024-04-27 00:58:03.784543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.329 [2024-04-27 00:58:03.784559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.784566] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.784571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.784587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 00:24:11.329 [2024-04-27 00:58:03.794403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.329 [2024-04-27 00:58:03.794529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.329 [2024-04-27 00:58:03.794545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.794551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.794560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.794576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 
00:24:11.329 [2024-04-27 00:58:03.804424] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.329 [2024-04-27 00:58:03.804552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.329 [2024-04-27 00:58:03.804567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.804574] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.804580] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.804595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 00:24:11.329 [2024-04-27 00:58:03.814441] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.329 [2024-04-27 00:58:03.814568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.329 [2024-04-27 00:58:03.814584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.814590] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.814596] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.814611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 00:24:11.329 [2024-04-27 00:58:03.824426] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.329 [2024-04-27 00:58:03.824589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.329 [2024-04-27 00:58:03.824605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.824612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.824618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.824634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 
00:24:11.329 [2024-04-27 00:58:03.834395] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.329 [2024-04-27 00:58:03.834518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.329 [2024-04-27 00:58:03.834534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.834541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.834546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.834563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 00:24:11.329 [2024-04-27 00:58:03.844501] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.329 [2024-04-27 00:58:03.844623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.329 [2024-04-27 00:58:03.844639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.844646] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.844651] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.844668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 00:24:11.329 [2024-04-27 00:58:03.854510] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.329 [2024-04-27 00:58:03.854635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.329 [2024-04-27 00:58:03.854651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.854658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.854663] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.854680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 
00:24:11.329 [2024-04-27 00:58:03.864466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.329 [2024-04-27 00:58:03.864640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.329 [2024-04-27 00:58:03.864656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.864663] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.864669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.864685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 00:24:11.329 [2024-04-27 00:58:03.874583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.329 [2024-04-27 00:58:03.874710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.329 [2024-04-27 00:58:03.874726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.874733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.874738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.874754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 00:24:11.329 [2024-04-27 00:58:03.884640] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.329 [2024-04-27 00:58:03.884778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.329 [2024-04-27 00:58:03.884794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.884804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.884810] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.884825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 
00:24:11.329 [2024-04-27 00:58:03.894612] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.329 [2024-04-27 00:58:03.894771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.329 [2024-04-27 00:58:03.894787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.329 [2024-04-27 00:58:03.894794] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.329 [2024-04-27 00:58:03.894799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.329 [2024-04-27 00:58:03.894814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.329 qpair failed and we were unable to recover it. 00:24:11.330 [2024-04-27 00:58:03.904565] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.330 [2024-04-27 00:58:03.904694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.330 [2024-04-27 00:58:03.904710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.330 [2024-04-27 00:58:03.904716] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.330 [2024-04-27 00:58:03.904722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.330 [2024-04-27 00:58:03.904738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.330 qpair failed and we were unable to recover it. 00:24:11.330 [2024-04-27 00:58:03.914727] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.330 [2024-04-27 00:58:03.914863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.330 [2024-04-27 00:58:03.914879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.330 [2024-04-27 00:58:03.914885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.330 [2024-04-27 00:58:03.914891] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.330 [2024-04-27 00:58:03.914907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.330 qpair failed and we were unable to recover it. 
00:24:11.330 [2024-04-27 00:58:03.924682] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.330 [2024-04-27 00:58:03.924804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.330 [2024-04-27 00:58:03.924820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.330 [2024-04-27 00:58:03.924827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.330 [2024-04-27 00:58:03.924833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.330 [2024-04-27 00:58:03.924848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.330 qpair failed and we were unable to recover it. 00:24:11.330 [2024-04-27 00:58:03.934732] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.330 [2024-04-27 00:58:03.934861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.330 [2024-04-27 00:58:03.934876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.330 [2024-04-27 00:58:03.934883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.330 [2024-04-27 00:58:03.934889] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.330 [2024-04-27 00:58:03.934905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.330 qpair failed and we were unable to recover it. 00:24:11.330 [2024-04-27 00:58:03.944698] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.330 [2024-04-27 00:58:03.944825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.330 [2024-04-27 00:58:03.944841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.330 [2024-04-27 00:58:03.944848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.330 [2024-04-27 00:58:03.944854] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.330 [2024-04-27 00:58:03.944869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.330 qpair failed and we were unable to recover it. 
00:24:11.330 [2024-04-27 00:58:03.954932] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.330 [2024-04-27 00:58:03.955057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.330 [2024-04-27 00:58:03.955079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.330 [2024-04-27 00:58:03.955086] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.330 [2024-04-27 00:58:03.955092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.330 [2024-04-27 00:58:03.955109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.330 qpair failed and we were unable to recover it. 00:24:11.330 [2024-04-27 00:58:03.964828] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.330 [2024-04-27 00:58:03.964998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.330 [2024-04-27 00:58:03.965015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.330 [2024-04-27 00:58:03.965021] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.330 [2024-04-27 00:58:03.965027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.330 [2024-04-27 00:58:03.965044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.330 qpair failed and we were unable to recover it. 00:24:11.330 [2024-04-27 00:58:03.974838] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.330 [2024-04-27 00:58:03.974968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.330 [2024-04-27 00:58:03.974988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.330 [2024-04-27 00:58:03.974994] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.330 [2024-04-27 00:58:03.975000] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.330 [2024-04-27 00:58:03.975016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.330 qpair failed and we were unable to recover it. 
00:24:11.330 [2024-04-27 00:58:03.984818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.330 [2024-04-27 00:58:03.984946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.330 [2024-04-27 00:58:03.984963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.330 [2024-04-27 00:58:03.984970] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.330 [2024-04-27 00:58:03.984975] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.330 [2024-04-27 00:58:03.984991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.330 qpair failed and we were unable to recover it. 00:24:11.330 [2024-04-27 00:58:03.994925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.330 [2024-04-27 00:58:03.995059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.330 [2024-04-27 00:58:03.995081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.330 [2024-04-27 00:58:03.995088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.330 [2024-04-27 00:58:03.995094] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.330 [2024-04-27 00:58:03.995111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.330 qpair failed and we were unable to recover it. 00:24:11.330 [2024-04-27 00:58:04.004937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.330 [2024-04-27 00:58:04.005060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.330 [2024-04-27 00:58:04.005083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.330 [2024-04-27 00:58:04.005090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.330 [2024-04-27 00:58:04.005096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.330 [2024-04-27 00:58:04.005112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.330 qpair failed and we were unable to recover it. 
00:24:11.330 [2024-04-27 00:58:04.014939] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.330 [2024-04-27 00:58:04.015079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.330 [2024-04-27 00:58:04.015095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.330 [2024-04-27 00:58:04.015102] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.330 [2024-04-27 00:58:04.015107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.330 [2024-04-27 00:58:04.015127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.330 qpair failed and we were unable to recover it. 00:24:11.590 [2024-04-27 00:58:04.024916] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.590 [2024-04-27 00:58:04.025043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.590 [2024-04-27 00:58:04.025059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.590 [2024-04-27 00:58:04.025066] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.590 [2024-04-27 00:58:04.025077] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.590 [2024-04-27 00:58:04.025093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.590 qpair failed and we were unable to recover it. 00:24:11.590 [2024-04-27 00:58:04.035016] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.590 [2024-04-27 00:58:04.035148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.590 [2024-04-27 00:58:04.035164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.590 [2024-04-27 00:58:04.035171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.590 [2024-04-27 00:58:04.035176] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.590 [2024-04-27 00:58:04.035193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.590 qpair failed and we were unable to recover it. 
00:24:11.590 [2024-04-27 00:58:04.045067] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.590 [2024-04-27 00:58:04.045216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.590 [2024-04-27 00:58:04.045231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.590 [2024-04-27 00:58:04.045238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.590 [2024-04-27 00:58:04.045243] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.590 [2024-04-27 00:58:04.045260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 00:24:11.591 [2024-04-27 00:58:04.055103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.055238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.591 [2024-04-27 00:58:04.055254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.591 [2024-04-27 00:58:04.055261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.591 [2024-04-27 00:58:04.055267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.591 [2024-04-27 00:58:04.055283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 00:24:11.591 [2024-04-27 00:58:04.065080] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.065206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.591 [2024-04-27 00:58:04.065225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.591 [2024-04-27 00:58:04.065232] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.591 [2024-04-27 00:58:04.065238] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.591 [2024-04-27 00:58:04.065253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 
00:24:11.591 [2024-04-27 00:58:04.075132] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.075261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.591 [2024-04-27 00:58:04.075276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.591 [2024-04-27 00:58:04.075283] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.591 [2024-04-27 00:58:04.075289] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.591 [2024-04-27 00:58:04.075305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 00:24:11.591 [2024-04-27 00:58:04.085159] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.085280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.591 [2024-04-27 00:58:04.085296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.591 [2024-04-27 00:58:04.085303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.591 [2024-04-27 00:58:04.085308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.591 [2024-04-27 00:58:04.085324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 00:24:11.591 [2024-04-27 00:58:04.095232] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.095362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.591 [2024-04-27 00:58:04.095378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.591 [2024-04-27 00:58:04.095385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.591 [2024-04-27 00:58:04.095390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.591 [2024-04-27 00:58:04.095406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 
00:24:11.591 [2024-04-27 00:58:04.105233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.105366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.591 [2024-04-27 00:58:04.105382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.591 [2024-04-27 00:58:04.105388] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.591 [2024-04-27 00:58:04.105394] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.591 [2024-04-27 00:58:04.105413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 00:24:11.591 [2024-04-27 00:58:04.115269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.115390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.591 [2024-04-27 00:58:04.115406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.591 [2024-04-27 00:58:04.115413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.591 [2024-04-27 00:58:04.115418] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.591 [2024-04-27 00:58:04.115435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 00:24:11.591 [2024-04-27 00:58:04.125220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.125347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.591 [2024-04-27 00:58:04.125364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.591 [2024-04-27 00:58:04.125371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.591 [2024-04-27 00:58:04.125377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.591 [2024-04-27 00:58:04.125393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 
00:24:11.591 [2024-04-27 00:58:04.135451] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.135586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.591 [2024-04-27 00:58:04.135602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.591 [2024-04-27 00:58:04.135609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.591 [2024-04-27 00:58:04.135615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.591 [2024-04-27 00:58:04.135631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 00:24:11.591 [2024-04-27 00:58:04.145318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.145447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.591 [2024-04-27 00:58:04.145463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.591 [2024-04-27 00:58:04.145469] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.591 [2024-04-27 00:58:04.145475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.591 [2024-04-27 00:58:04.145491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 00:24:11.591 [2024-04-27 00:58:04.155289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.155453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.591 [2024-04-27 00:58:04.155469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.591 [2024-04-27 00:58:04.155475] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.591 [2024-04-27 00:58:04.155481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.591 [2024-04-27 00:58:04.155496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 
00:24:11.591 [2024-04-27 00:58:04.165332] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.165477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.591 [2024-04-27 00:58:04.165493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.591 [2024-04-27 00:58:04.165500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.591 [2024-04-27 00:58:04.165506] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.591 [2024-04-27 00:58:04.165521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 00:24:11.591 [2024-04-27 00:58:04.175429] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.175558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.591 [2024-04-27 00:58:04.175574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.591 [2024-04-27 00:58:04.175581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.591 [2024-04-27 00:58:04.175586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.591 [2024-04-27 00:58:04.175602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.591 qpair failed and we were unable to recover it. 00:24:11.591 [2024-04-27 00:58:04.185427] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.591 [2024-04-27 00:58:04.185558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.592 [2024-04-27 00:58:04.185574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.592 [2024-04-27 00:58:04.185580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.592 [2024-04-27 00:58:04.185586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.592 [2024-04-27 00:58:04.185602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.592 qpair failed and we were unable to recover it. 
00:24:11.592 [2024-04-27 00:58:04.195483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.592 [2024-04-27 00:58:04.195787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.592 [2024-04-27 00:58:04.195804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.592 [2024-04-27 00:58:04.195810] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.592 [2024-04-27 00:58:04.195820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.592 [2024-04-27 00:58:04.195835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.592 qpair failed and we were unable to recover it. 00:24:11.592 [2024-04-27 00:58:04.205437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.592 [2024-04-27 00:58:04.205563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.592 [2024-04-27 00:58:04.205579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.592 [2024-04-27 00:58:04.205586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.592 [2024-04-27 00:58:04.205592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.592 [2024-04-27 00:58:04.205607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.592 qpair failed and we were unable to recover it. 00:24:11.592 [2024-04-27 00:58:04.215474] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.592 [2024-04-27 00:58:04.215603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.592 [2024-04-27 00:58:04.215619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.592 [2024-04-27 00:58:04.215626] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.592 [2024-04-27 00:58:04.215632] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.592 [2024-04-27 00:58:04.215648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.592 qpair failed and we were unable to recover it. 
00:24:11.592 [2024-04-27 00:58:04.225491] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.592 [2024-04-27 00:58:04.225617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.592 [2024-04-27 00:58:04.225633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.592 [2024-04-27 00:58:04.225640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.592 [2024-04-27 00:58:04.225647] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.592 [2024-04-27 00:58:04.225662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.592 qpair failed and we were unable to recover it. 00:24:11.592 [2024-04-27 00:58:04.235517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.592 [2024-04-27 00:58:04.235644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.592 [2024-04-27 00:58:04.235661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.592 [2024-04-27 00:58:04.235668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.592 [2024-04-27 00:58:04.235674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.592 [2024-04-27 00:58:04.235690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.592 qpair failed and we were unable to recover it. 00:24:11.592 [2024-04-27 00:58:04.245555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.592 [2024-04-27 00:58:04.245682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.592 [2024-04-27 00:58:04.245698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.592 [2024-04-27 00:58:04.245705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.592 [2024-04-27 00:58:04.245711] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.592 [2024-04-27 00:58:04.245728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.592 qpair failed and we were unable to recover it. 
00:24:11.592 [2024-04-27 00:58:04.255580] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.592 [2024-04-27 00:58:04.255710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.592 [2024-04-27 00:58:04.255727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.592 [2024-04-27 00:58:04.255734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.592 [2024-04-27 00:58:04.255739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.592 [2024-04-27 00:58:04.255755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.592 qpair failed and we were unable to recover it. 00:24:11.592 [2024-04-27 00:58:04.265642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.592 [2024-04-27 00:58:04.265809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.592 [2024-04-27 00:58:04.265827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.592 [2024-04-27 00:58:04.265835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.592 [2024-04-27 00:58:04.265841] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.592 [2024-04-27 00:58:04.265858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.592 qpair failed and we were unable to recover it. 00:24:11.592 [2024-04-27 00:58:04.275641] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.592 [2024-04-27 00:58:04.275769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.592 [2024-04-27 00:58:04.275785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.592 [2024-04-27 00:58:04.275792] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.592 [2024-04-27 00:58:04.275799] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.592 [2024-04-27 00:58:04.275815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.592 qpair failed and we were unable to recover it. 
00:24:11.852 [2024-04-27 00:58:04.285675] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.853 [2024-04-27 00:58:04.285832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.853 [2024-04-27 00:58:04.285849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.853 [2024-04-27 00:58:04.285860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.853 [2024-04-27 00:58:04.285865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.853 [2024-04-27 00:58:04.285881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.853 qpair failed and we were unable to recover it. 00:24:11.853 [2024-04-27 00:58:04.295718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.853 [2024-04-27 00:58:04.295845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.853 [2024-04-27 00:58:04.295861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.853 [2024-04-27 00:58:04.295868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.853 [2024-04-27 00:58:04.295874] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.853 [2024-04-27 00:58:04.295890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.853 qpair failed and we were unable to recover it. 00:24:11.853 [2024-04-27 00:58:04.305782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.853 [2024-04-27 00:58:04.305905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.853 [2024-04-27 00:58:04.305921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.853 [2024-04-27 00:58:04.305929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.853 [2024-04-27 00:58:04.305935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.853 [2024-04-27 00:58:04.305951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.853 qpair failed and we were unable to recover it. 
00:24:11.853 [2024-04-27 00:58:04.315766] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.853 [2024-04-27 00:58:04.315930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.853 [2024-04-27 00:58:04.315947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.853 [2024-04-27 00:58:04.315954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.853 [2024-04-27 00:58:04.315960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.853 [2024-04-27 00:58:04.315977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.853 qpair failed and we were unable to recover it. 00:24:11.853 [2024-04-27 00:58:04.325909] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.853 [2024-04-27 00:58:04.326046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.853 [2024-04-27 00:58:04.326063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.853 [2024-04-27 00:58:04.326076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.853 [2024-04-27 00:58:04.326083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.853 [2024-04-27 00:58:04.326099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.853 qpair failed and we were unable to recover it. 00:24:11.853 [2024-04-27 00:58:04.335889] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.853 [2024-04-27 00:58:04.336015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.853 [2024-04-27 00:58:04.336032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.853 [2024-04-27 00:58:04.336039] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.853 [2024-04-27 00:58:04.336045] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.853 [2024-04-27 00:58:04.336061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.853 qpair failed and we were unable to recover it. 
00:24:11.853 [2024-04-27 00:58:04.345900] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.853 [2024-04-27 00:58:04.346029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.853 [2024-04-27 00:58:04.346045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.853 [2024-04-27 00:58:04.346053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.853 [2024-04-27 00:58:04.346059] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.853 [2024-04-27 00:58:04.346081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.853 qpair failed and we were unable to recover it. 00:24:11.853 [2024-04-27 00:58:04.356093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.853 [2024-04-27 00:58:04.356222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.853 [2024-04-27 00:58:04.356238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.853 [2024-04-27 00:58:04.356245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.853 [2024-04-27 00:58:04.356251] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.853 [2024-04-27 00:58:04.356267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.853 qpair failed and we were unable to recover it. 00:24:11.853 [2024-04-27 00:58:04.365950] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.853 [2024-04-27 00:58:04.366138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.853 [2024-04-27 00:58:04.366156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.853 [2024-04-27 00:58:04.366163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.853 [2024-04-27 00:58:04.366170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:11.853 [2024-04-27 00:58:04.366187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:11.853 qpair failed and we were unable to recover it. 
00:24:11.853 [2024-04-27 00:58:04.376073] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.853 [2024-04-27 00:58:04.376285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.853 [2024-04-27 00:58:04.376321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.853 [2024-04-27 00:58:04.376333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.853 [2024-04-27 00:58:04.376343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.853 [2024-04-27 00:58:04.376368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.853 qpair failed and we were unable to recover it. 00:24:11.853 [2024-04-27 00:58:04.386072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.853 [2024-04-27 00:58:04.386215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.853 [2024-04-27 00:58:04.386245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.853 [2024-04-27 00:58:04.386253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.853 [2024-04-27 00:58:04.386261] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.853 [2024-04-27 00:58:04.386282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.853 qpair failed and we were unable to recover it. 00:24:11.853 [2024-04-27 00:58:04.396054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.853 [2024-04-27 00:58:04.396217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.853 [2024-04-27 00:58:04.396237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.853 [2024-04-27 00:58:04.396246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.853 [2024-04-27 00:58:04.396253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.853 [2024-04-27 00:58:04.396270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.853 qpair failed and we were unable to recover it. 
00:24:11.853 [2024-04-27 00:58:04.406084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.853 [2024-04-27 00:58:04.406209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.853 [2024-04-27 00:58:04.406227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.853 [2024-04-27 00:58:04.406234] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.853 [2024-04-27 00:58:04.406240] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.406257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 00:24:11.854 [2024-04-27 00:58:04.416145] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.854 [2024-04-27 00:58:04.416273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.854 [2024-04-27 00:58:04.416291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.854 [2024-04-27 00:58:04.416298] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.854 [2024-04-27 00:58:04.416304] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.416320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 00:24:11.854 [2024-04-27 00:58:04.426135] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.854 [2024-04-27 00:58:04.426263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.854 [2024-04-27 00:58:04.426281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.854 [2024-04-27 00:58:04.426288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.854 [2024-04-27 00:58:04.426294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.426310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 
00:24:11.854 [2024-04-27 00:58:04.436206] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.854 [2024-04-27 00:58:04.436335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.854 [2024-04-27 00:58:04.436353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.854 [2024-04-27 00:58:04.436360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.854 [2024-04-27 00:58:04.436366] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.436383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 00:24:11.854 [2024-04-27 00:58:04.446180] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.854 [2024-04-27 00:58:04.446301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.854 [2024-04-27 00:58:04.446318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.854 [2024-04-27 00:58:04.446325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.854 [2024-04-27 00:58:04.446331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.446347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 00:24:11.854 [2024-04-27 00:58:04.456224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.854 [2024-04-27 00:58:04.456354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.854 [2024-04-27 00:58:04.456371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.854 [2024-04-27 00:58:04.456379] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.854 [2024-04-27 00:58:04.456385] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.456401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 
00:24:11.854 [2024-04-27 00:58:04.466247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.854 [2024-04-27 00:58:04.466375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.854 [2024-04-27 00:58:04.466396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.854 [2024-04-27 00:58:04.466403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.854 [2024-04-27 00:58:04.466410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.466427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 00:24:11.854 [2024-04-27 00:58:04.476276] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.854 [2024-04-27 00:58:04.476414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.854 [2024-04-27 00:58:04.476431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.854 [2024-04-27 00:58:04.476438] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.854 [2024-04-27 00:58:04.476445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.476461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 00:24:11.854 [2024-04-27 00:58:04.486305] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.854 [2024-04-27 00:58:04.486432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.854 [2024-04-27 00:58:04.486450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.854 [2024-04-27 00:58:04.486457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.854 [2024-04-27 00:58:04.486464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.486481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 
00:24:11.854 [2024-04-27 00:58:04.496333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.854 [2024-04-27 00:58:04.496464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.854 [2024-04-27 00:58:04.496482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.854 [2024-04-27 00:58:04.496489] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.854 [2024-04-27 00:58:04.496495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.496511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 00:24:11.854 [2024-04-27 00:58:04.506284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.854 [2024-04-27 00:58:04.506421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.854 [2024-04-27 00:58:04.506438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.854 [2024-04-27 00:58:04.506446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.854 [2024-04-27 00:58:04.506451] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.506471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 00:24:11.854 [2024-04-27 00:58:04.516386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.854 [2024-04-27 00:58:04.516521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.854 [2024-04-27 00:58:04.516539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.854 [2024-04-27 00:58:04.516546] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.854 [2024-04-27 00:58:04.516552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.516569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 
00:24:11.854 [2024-04-27 00:58:04.526419] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.854 [2024-04-27 00:58:04.526545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.854 [2024-04-27 00:58:04.526562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.854 [2024-04-27 00:58:04.526570] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.854 [2024-04-27 00:58:04.526576] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.526592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 00:24:11.854 [2024-04-27 00:58:04.536371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.854 [2024-04-27 00:58:04.536498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.854 [2024-04-27 00:58:04.536516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.854 [2024-04-27 00:58:04.536523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.854 [2024-04-27 00:58:04.536529] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.854 [2024-04-27 00:58:04.536546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.854 qpair failed and we were unable to recover it. 00:24:11.854 [2024-04-27 00:58:04.546396] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.855 [2024-04-27 00:58:04.546567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.855 [2024-04-27 00:58:04.546588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.855 [2024-04-27 00:58:04.546595] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.855 [2024-04-27 00:58:04.546603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:11.855 [2024-04-27 00:58:04.546620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.855 qpair failed and we were unable to recover it. 
00:24:12.116 [2024-04-27 00:58:04.556507] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.116 [2024-04-27 00:58:04.556633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.116 [2024-04-27 00:58:04.556660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.116 [2024-04-27 00:58:04.556668] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.116 [2024-04-27 00:58:04.556674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.116 [2024-04-27 00:58:04.556691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.116 qpair failed and we were unable to recover it. 00:24:12.116 [2024-04-27 00:58:04.566448] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.116 [2024-04-27 00:58:04.566588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.116 [2024-04-27 00:58:04.566606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.116 [2024-04-27 00:58:04.566614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.116 [2024-04-27 00:58:04.566620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.116 [2024-04-27 00:58:04.566637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.116 qpair failed and we were unable to recover it. 00:24:12.116 [2024-04-27 00:58:04.576564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.116 [2024-04-27 00:58:04.576692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.116 [2024-04-27 00:58:04.576710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.116 [2024-04-27 00:58:04.576718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.116 [2024-04-27 00:58:04.576724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.116 [2024-04-27 00:58:04.576741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.116 qpair failed and we were unable to recover it. 
00:24:12.116 [2024-04-27 00:58:04.586517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.116 [2024-04-27 00:58:04.586648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.116 [2024-04-27 00:58:04.586668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.116 [2024-04-27 00:58:04.586677] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.116 [2024-04-27 00:58:04.586683] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.116 [2024-04-27 00:58:04.586701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.116 qpair failed and we were unable to recover it. 00:24:12.116 [2024-04-27 00:58:04.596534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.116 [2024-04-27 00:58:04.596673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.116 [2024-04-27 00:58:04.596692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.116 [2024-04-27 00:58:04.596700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.116 [2024-04-27 00:58:04.596708] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.116 [2024-04-27 00:58:04.596728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.116 qpair failed and we were unable to recover it. 00:24:12.116 [2024-04-27 00:58:04.606643] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.116 [2024-04-27 00:58:04.606774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.116 [2024-04-27 00:58:04.606791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.116 [2024-04-27 00:58:04.606799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.116 [2024-04-27 00:58:04.606805] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.116 [2024-04-27 00:58:04.606821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.116 qpair failed and we were unable to recover it. 
00:24:12.116 [2024-04-27 00:58:04.616656] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.116 [2024-04-27 00:58:04.616783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.116 [2024-04-27 00:58:04.616800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.116 [2024-04-27 00:58:04.616807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.116 [2024-04-27 00:58:04.616813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.116 [2024-04-27 00:58:04.616830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.116 qpair failed and we were unable to recover it. 00:24:12.116 [2024-04-27 00:58:04.626701] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.116 [2024-04-27 00:58:04.626831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.116 [2024-04-27 00:58:04.626848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.116 [2024-04-27 00:58:04.626856] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.116 [2024-04-27 00:58:04.626863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.116 [2024-04-27 00:58:04.626879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.116 qpair failed and we were unable to recover it. 00:24:12.116 [2024-04-27 00:58:04.636730] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.116 [2024-04-27 00:58:04.636859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.116 [2024-04-27 00:58:04.636878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.116 [2024-04-27 00:58:04.636885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.116 [2024-04-27 00:58:04.636892] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.116 [2024-04-27 00:58:04.636909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.116 qpair failed and we were unable to recover it. 
00:24:12.116 [2024-04-27 00:58:04.646666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.116 [2024-04-27 00:58:04.646791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.116 [2024-04-27 00:58:04.646813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.116 [2024-04-27 00:58:04.646820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.116 [2024-04-27 00:58:04.646826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.116 [2024-04-27 00:58:04.646842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.116 qpair failed and we were unable to recover it. 00:24:12.116 [2024-04-27 00:58:04.656793] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.116 [2024-04-27 00:58:04.656922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.116 [2024-04-27 00:58:04.656939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.116 [2024-04-27 00:58:04.656947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.116 [2024-04-27 00:58:04.656953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.116 [2024-04-27 00:58:04.656969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.116 qpair failed and we were unable to recover it. 00:24:12.116 [2024-04-27 00:58:04.666758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.116 [2024-04-27 00:58:04.666888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.116 [2024-04-27 00:58:04.666906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.116 [2024-04-27 00:58:04.666913] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.116 [2024-04-27 00:58:04.666920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.116 [2024-04-27 00:58:04.666936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.116 qpair failed and we were unable to recover it. 
00:24:12.116 [2024-04-27 00:58:04.676863] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.676989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.677006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.677014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.677020] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.677037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 00:24:12.117 [2024-04-27 00:58:04.686879] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.687005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.687022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.687030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.687036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.687055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 00:24:12.117 [2024-04-27 00:58:04.696959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.697119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.697137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.697144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.697150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.697166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 
00:24:12.117 [2024-04-27 00:58:04.706933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.707093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.707110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.707118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.707124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.707141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 00:24:12.117 [2024-04-27 00:58:04.716972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.717111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.717128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.717136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.717141] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.717157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 00:24:12.117 [2024-04-27 00:58:04.726994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.727125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.727142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.727150] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.727156] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.727172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 
00:24:12.117 [2024-04-27 00:58:04.737029] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.737167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.737188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.737195] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.737201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.737217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 00:24:12.117 [2024-04-27 00:58:04.747097] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.747224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.747242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.747249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.747255] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.747271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 00:24:12.117 [2024-04-27 00:58:04.757092] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.757227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.757244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.757252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.757258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.757274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 
00:24:12.117 [2024-04-27 00:58:04.767039] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.767329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.767347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.767354] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.767360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.767377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 00:24:12.117 [2024-04-27 00:58:04.777147] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.777276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.777293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.777300] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.777310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.777326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 00:24:12.117 [2024-04-27 00:58:04.787178] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.787305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.787322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.787330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.787336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.787352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 
00:24:12.117 [2024-04-27 00:58:04.797187] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.797325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.797342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.797350] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.797356] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.797372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 00:24:12.117 [2024-04-27 00:58:04.807216] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.117 [2024-04-27 00:58:04.807344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.117 [2024-04-27 00:58:04.807364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.117 [2024-04-27 00:58:04.807372] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.117 [2024-04-27 00:58:04.807378] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.117 [2024-04-27 00:58:04.807395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.117 qpair failed and we were unable to recover it. 00:24:12.378 [2024-04-27 00:58:04.817279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.378 [2024-04-27 00:58:04.817410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.378 [2024-04-27 00:58:04.817431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.378 [2024-04-27 00:58:04.817439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.378 [2024-04-27 00:58:04.817446] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.817463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 
00:24:12.379 [2024-04-27 00:58:04.827279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.827424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.379 [2024-04-27 00:58:04.827442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.379 [2024-04-27 00:58:04.827450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.379 [2024-04-27 00:58:04.827456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.827472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 00:24:12.379 [2024-04-27 00:58:04.837307] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.837443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.379 [2024-04-27 00:58:04.837460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.379 [2024-04-27 00:58:04.837467] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.379 [2024-04-27 00:58:04.837474] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.837491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 00:24:12.379 [2024-04-27 00:58:04.847321] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.847458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.379 [2024-04-27 00:58:04.847475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.379 [2024-04-27 00:58:04.847483] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.379 [2024-04-27 00:58:04.847489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.847505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 
00:24:12.379 [2024-04-27 00:58:04.857365] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.857663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.379 [2024-04-27 00:58:04.857681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.379 [2024-04-27 00:58:04.857688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.379 [2024-04-27 00:58:04.857695] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.857711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 00:24:12.379 [2024-04-27 00:58:04.867389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.867522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.379 [2024-04-27 00:58:04.867539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.379 [2024-04-27 00:58:04.867547] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.379 [2024-04-27 00:58:04.867557] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.867573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 00:24:12.379 [2024-04-27 00:58:04.877419] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.877543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.379 [2024-04-27 00:58:04.877560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.379 [2024-04-27 00:58:04.877567] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.379 [2024-04-27 00:58:04.877573] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.877590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 
00:24:12.379 [2024-04-27 00:58:04.887436] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.887556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.379 [2024-04-27 00:58:04.887574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.379 [2024-04-27 00:58:04.887581] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.379 [2024-04-27 00:58:04.887588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.887604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 00:24:12.379 [2024-04-27 00:58:04.897386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.897556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.379 [2024-04-27 00:58:04.897572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.379 [2024-04-27 00:58:04.897580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.379 [2024-04-27 00:58:04.897586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.897602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 00:24:12.379 [2024-04-27 00:58:04.907480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.907615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.379 [2024-04-27 00:58:04.907632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.379 [2024-04-27 00:58:04.907640] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.379 [2024-04-27 00:58:04.907646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.907662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 
00:24:12.379 [2024-04-27 00:58:04.917536] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.917668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.379 [2024-04-27 00:58:04.917686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.379 [2024-04-27 00:58:04.917694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.379 [2024-04-27 00:58:04.917700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.917716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 00:24:12.379 [2024-04-27 00:58:04.927568] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.927707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.379 [2024-04-27 00:58:04.927724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.379 [2024-04-27 00:58:04.927732] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.379 [2024-04-27 00:58:04.927738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.927755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 00:24:12.379 [2024-04-27 00:58:04.937579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.937706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.379 [2024-04-27 00:58:04.937723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.379 [2024-04-27 00:58:04.937731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.379 [2024-04-27 00:58:04.937737] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.937753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 
00:24:12.379 [2024-04-27 00:58:04.947604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.947735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.379 [2024-04-27 00:58:04.947753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.379 [2024-04-27 00:58:04.947760] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.379 [2024-04-27 00:58:04.947767] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.379 [2024-04-27 00:58:04.947783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.379 qpair failed and we were unable to recover it. 00:24:12.379 [2024-04-27 00:58:04.957636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.379 [2024-04-27 00:58:04.957758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.380 [2024-04-27 00:58:04.957776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.380 [2024-04-27 00:58:04.957783] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.380 [2024-04-27 00:58:04.957793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.380 [2024-04-27 00:58:04.957809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.380 qpair failed and we were unable to recover it. 00:24:12.380 [2024-04-27 00:58:04.967575] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.380 [2024-04-27 00:58:04.967710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.380 [2024-04-27 00:58:04.967727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.380 [2024-04-27 00:58:04.967735] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.380 [2024-04-27 00:58:04.967741] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.380 [2024-04-27 00:58:04.967757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.380 qpair failed and we were unable to recover it. 
00:24:12.380 [2024-04-27 00:58:04.977726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.380 [2024-04-27 00:58:04.977863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.380 [2024-04-27 00:58:04.977880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.380 [2024-04-27 00:58:04.977887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.380 [2024-04-27 00:58:04.977893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.380 [2024-04-27 00:58:04.977910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.380 qpair failed and we were unable to recover it. 00:24:12.380 [2024-04-27 00:58:04.987701] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.380 [2024-04-27 00:58:04.987861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.380 [2024-04-27 00:58:04.987878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.380 [2024-04-27 00:58:04.987885] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.380 [2024-04-27 00:58:04.987892] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.380 [2024-04-27 00:58:04.987908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.380 qpair failed and we were unable to recover it. 00:24:12.380 [2024-04-27 00:58:04.997747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.380 [2024-04-27 00:58:04.997875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.380 [2024-04-27 00:58:04.997893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.380 [2024-04-27 00:58:04.997900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.380 [2024-04-27 00:58:04.997906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.380 [2024-04-27 00:58:04.997923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.380 qpair failed and we were unable to recover it. 
00:24:12.380 [2024-04-27 00:58:05.007818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.380 [2024-04-27 00:58:05.007984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.380 [2024-04-27 00:58:05.008002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.380 [2024-04-27 00:58:05.008009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.380 [2024-04-27 00:58:05.008015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.380 [2024-04-27 00:58:05.008031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.380 qpair failed and we were unable to recover it. 00:24:12.380 [2024-04-27 00:58:05.017776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.380 [2024-04-27 00:58:05.017908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.380 [2024-04-27 00:58:05.017924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.380 [2024-04-27 00:58:05.017932] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.380 [2024-04-27 00:58:05.017938] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.380 [2024-04-27 00:58:05.017954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.380 qpair failed and we were unable to recover it. 00:24:12.380 [2024-04-27 00:58:05.027786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.380 [2024-04-27 00:58:05.027926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.380 [2024-04-27 00:58:05.027943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.380 [2024-04-27 00:58:05.027950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.380 [2024-04-27 00:58:05.027957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.380 [2024-04-27 00:58:05.027972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.380 qpair failed and we were unable to recover it. 
00:24:12.380 [2024-04-27 00:58:05.037848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.380 [2024-04-27 00:58:05.037976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.380 [2024-04-27 00:58:05.037994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.380 [2024-04-27 00:58:05.038001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.380 [2024-04-27 00:58:05.038008] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.380 [2024-04-27 00:58:05.038024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.380 qpair failed and we were unable to recover it. 00:24:12.380 [2024-04-27 00:58:05.047855] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.380 [2024-04-27 00:58:05.047979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.380 [2024-04-27 00:58:05.047997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.380 [2024-04-27 00:58:05.048007] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.380 [2024-04-27 00:58:05.048013] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.380 [2024-04-27 00:58:05.048029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.380 qpair failed and we were unable to recover it. 00:24:12.380 [2024-04-27 00:58:05.057905] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.380 [2024-04-27 00:58:05.058046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.380 [2024-04-27 00:58:05.058064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.380 [2024-04-27 00:58:05.058076] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.380 [2024-04-27 00:58:05.058083] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.380 [2024-04-27 00:58:05.058099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.380 qpair failed and we were unable to recover it. 
00:24:12.380 [2024-04-27 00:58:05.067969] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.380 [2024-04-27 00:58:05.068138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.380 [2024-04-27 00:58:05.068156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.380 [2024-04-27 00:58:05.068164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.380 [2024-04-27 00:58:05.068170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.380 [2024-04-27 00:58:05.068186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.380 qpair failed and we were unable to recover it. 00:24:12.641 [2024-04-27 00:58:05.077976] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.641 [2024-04-27 00:58:05.078114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.641 [2024-04-27 00:58:05.078135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.641 [2024-04-27 00:58:05.078142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.641 [2024-04-27 00:58:05.078149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.641 [2024-04-27 00:58:05.078166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.641 qpair failed and we were unable to recover it. 00:24:12.641 [2024-04-27 00:58:05.087993] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.641 [2024-04-27 00:58:05.088127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.641 [2024-04-27 00:58:05.088148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.641 [2024-04-27 00:58:05.088156] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.641 [2024-04-27 00:58:05.088162] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.641 [2024-04-27 00:58:05.088179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.641 qpair failed and we were unable to recover it. 
00:24:12.641 [2024-04-27 00:58:05.098026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.641 [2024-04-27 00:58:05.098161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.641 [2024-04-27 00:58:05.098180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.641 [2024-04-27 00:58:05.098188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.641 [2024-04-27 00:58:05.098194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.641 [2024-04-27 00:58:05.098210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.641 qpair failed and we were unable to recover it. 00:24:12.641 [2024-04-27 00:58:05.108051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.641 [2024-04-27 00:58:05.108185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.641 [2024-04-27 00:58:05.108203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.641 [2024-04-27 00:58:05.108211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.641 [2024-04-27 00:58:05.108217] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.641 [2024-04-27 00:58:05.108234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.641 qpair failed and we were unable to recover it. 00:24:12.641 [2024-04-27 00:58:05.118075] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.641 [2024-04-27 00:58:05.118199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.641 [2024-04-27 00:58:05.118216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.641 [2024-04-27 00:58:05.118224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.641 [2024-04-27 00:58:05.118230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.641 [2024-04-27 00:58:05.118247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.641 qpair failed and we were unable to recover it. 
00:24:12.641 [2024-04-27 00:58:05.128109] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.641 [2024-04-27 00:58:05.128244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.641 [2024-04-27 00:58:05.128262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.641 [2024-04-27 00:58:05.128269] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.641 [2024-04-27 00:58:05.128276] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.641 [2024-04-27 00:58:05.128293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.641 qpair failed and we were unable to recover it. 00:24:12.641 [2024-04-27 00:58:05.138139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.641 [2024-04-27 00:58:05.138267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.641 [2024-04-27 00:58:05.138284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.641 [2024-04-27 00:58:05.138295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.641 [2024-04-27 00:58:05.138301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.641 [2024-04-27 00:58:05.138318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.641 qpair failed and we were unable to recover it. 00:24:12.641 [2024-04-27 00:58:05.148151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.641 [2024-04-27 00:58:05.148280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.641 [2024-04-27 00:58:05.148300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.641 [2024-04-27 00:58:05.148307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.148314] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.148331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 
00:24:12.642 [2024-04-27 00:58:05.158185] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.642 [2024-04-27 00:58:05.158311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.642 [2024-04-27 00:58:05.158329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.642 [2024-04-27 00:58:05.158336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.158343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.158360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 00:24:12.642 [2024-04-27 00:58:05.168269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.642 [2024-04-27 00:58:05.168397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.642 [2024-04-27 00:58:05.168414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.642 [2024-04-27 00:58:05.168422] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.168428] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.168444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 00:24:12.642 [2024-04-27 00:58:05.178191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.642 [2024-04-27 00:58:05.178319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.642 [2024-04-27 00:58:05.178337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.642 [2024-04-27 00:58:05.178344] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.178350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.178366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 
00:24:12.642 [2024-04-27 00:58:05.188197] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.642 [2024-04-27 00:58:05.188338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.642 [2024-04-27 00:58:05.188356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.642 [2024-04-27 00:58:05.188363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.188370] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.188386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 00:24:12.642 [2024-04-27 00:58:05.198293] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.642 [2024-04-27 00:58:05.198421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.642 [2024-04-27 00:58:05.198439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.642 [2024-04-27 00:58:05.198446] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.198452] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.198469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 00:24:12.642 [2024-04-27 00:58:05.208371] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.642 [2024-04-27 00:58:05.208498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.642 [2024-04-27 00:58:05.208515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.642 [2024-04-27 00:58:05.208523] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.208530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.208546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 
00:24:12.642 [2024-04-27 00:58:05.218429] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.642 [2024-04-27 00:58:05.218557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.642 [2024-04-27 00:58:05.218578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.642 [2024-04-27 00:58:05.218585] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.218591] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.218609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 00:24:12.642 [2024-04-27 00:58:05.228402] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.642 [2024-04-27 00:58:05.228531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.642 [2024-04-27 00:58:05.228549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.642 [2024-04-27 00:58:05.228561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.228568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.228585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 00:24:12.642 [2024-04-27 00:58:05.238425] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.642 [2024-04-27 00:58:05.238724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.642 [2024-04-27 00:58:05.238743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.642 [2024-04-27 00:58:05.238750] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.238757] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.238774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 
00:24:12.642 [2024-04-27 00:58:05.248435] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.642 [2024-04-27 00:58:05.248558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.642 [2024-04-27 00:58:05.248576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.642 [2024-04-27 00:58:05.248583] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.248589] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.248605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 00:24:12.642 [2024-04-27 00:58:05.258529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.642 [2024-04-27 00:58:05.258692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.642 [2024-04-27 00:58:05.258711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.642 [2024-04-27 00:58:05.258718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.258724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.258741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 00:24:12.642 [2024-04-27 00:58:05.268511] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.642 [2024-04-27 00:58:05.268642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.642 [2024-04-27 00:58:05.268659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.642 [2024-04-27 00:58:05.268666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.268673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.268689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 
00:24:12.642 [2024-04-27 00:58:05.278534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.642 [2024-04-27 00:58:05.278670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.642 [2024-04-27 00:58:05.278687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.642 [2024-04-27 00:58:05.278695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.642 [2024-04-27 00:58:05.278701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.642 [2024-04-27 00:58:05.278718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.642 qpair failed and we were unable to recover it. 00:24:12.642 [2024-04-27 00:58:05.288614] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.643 [2024-04-27 00:58:05.288755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.643 [2024-04-27 00:58:05.288772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.643 [2024-04-27 00:58:05.288780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.643 [2024-04-27 00:58:05.288786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.643 [2024-04-27 00:58:05.288802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.643 qpair failed and we were unable to recover it. 00:24:12.643 [2024-04-27 00:58:05.298533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.643 [2024-04-27 00:58:05.298660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.643 [2024-04-27 00:58:05.298678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.643 [2024-04-27 00:58:05.298686] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.643 [2024-04-27 00:58:05.298692] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.643 [2024-04-27 00:58:05.298708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.643 qpair failed and we were unable to recover it. 
00:24:12.643 [2024-04-27 00:58:05.308626] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.643 [2024-04-27 00:58:05.308753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.643 [2024-04-27 00:58:05.308771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.643 [2024-04-27 00:58:05.308778] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.643 [2024-04-27 00:58:05.308784] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.643 [2024-04-27 00:58:05.308800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.643 qpair failed and we were unable to recover it. 00:24:12.643 [2024-04-27 00:58:05.318653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.643 [2024-04-27 00:58:05.318779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.643 [2024-04-27 00:58:05.318800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.643 [2024-04-27 00:58:05.318807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.643 [2024-04-27 00:58:05.318813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.643 [2024-04-27 00:58:05.318830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.643 qpair failed and we were unable to recover it. 00:24:12.643 [2024-04-27 00:58:05.328615] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.643 [2024-04-27 00:58:05.328742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.643 [2024-04-27 00:58:05.328759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.643 [2024-04-27 00:58:05.328767] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.643 [2024-04-27 00:58:05.328773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.643 [2024-04-27 00:58:05.328789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.643 qpair failed and we were unable to recover it. 
00:24:12.903 [2024-04-27 00:58:05.338739] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.903 [2024-04-27 00:58:05.338869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.903 [2024-04-27 00:58:05.338890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.903 [2024-04-27 00:58:05.338899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.903 [2024-04-27 00:58:05.338905] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.903 [2024-04-27 00:58:05.338923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.903 qpair failed and we were unable to recover it. 00:24:12.903 [2024-04-27 00:58:05.348726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.903 [2024-04-27 00:58:05.348859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.903 [2024-04-27 00:58:05.348879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.903 [2024-04-27 00:58:05.348887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.903 [2024-04-27 00:58:05.348893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.903 [2024-04-27 00:58:05.348911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.903 qpair failed and we were unable to recover it. 00:24:12.903 [2024-04-27 00:58:05.358801] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.903 [2024-04-27 00:58:05.358933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.903 [2024-04-27 00:58:05.358951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.358959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.358965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.904 [2024-04-27 00:58:05.358981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.904 qpair failed and we were unable to recover it. 
00:24:12.904 [2024-04-27 00:58:05.368847] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.904 [2024-04-27 00:58:05.368994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.904 [2024-04-27 00:58:05.369012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.369020] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.369026] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.904 [2024-04-27 00:58:05.369042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.904 qpair failed and we were unable to recover it. 00:24:12.904 [2024-04-27 00:58:05.378844] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.904 [2024-04-27 00:58:05.378976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.904 [2024-04-27 00:58:05.378994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.379001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.379007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.904 [2024-04-27 00:58:05.379024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.904 qpair failed and we were unable to recover it. 00:24:12.904 [2024-04-27 00:58:05.388851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.904 [2024-04-27 00:58:05.388983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.904 [2024-04-27 00:58:05.389000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.389008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.389016] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.904 [2024-04-27 00:58:05.389032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.904 qpair failed and we were unable to recover it. 
00:24:12.904 [2024-04-27 00:58:05.398894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.904 [2024-04-27 00:58:05.399022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.904 [2024-04-27 00:58:05.399040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.399047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.399054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.904 [2024-04-27 00:58:05.399077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.904 qpair failed and we were unable to recover it. 00:24:12.904 [2024-04-27 00:58:05.408847] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.904 [2024-04-27 00:58:05.408975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.904 [2024-04-27 00:58:05.408996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.409003] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.409010] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.904 [2024-04-27 00:58:05.409026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.904 qpair failed and we were unable to recover it. 00:24:12.904 [2024-04-27 00:58:05.418938] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.904 [2024-04-27 00:58:05.419064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.904 [2024-04-27 00:58:05.419088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.419096] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.419102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.904 [2024-04-27 00:58:05.419118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.904 qpair failed and we were unable to recover it. 
00:24:12.904 [2024-04-27 00:58:05.428966] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.904 [2024-04-27 00:58:05.429103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.904 [2024-04-27 00:58:05.429120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.429128] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.429134] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.904 [2024-04-27 00:58:05.429150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.904 qpair failed and we were unable to recover it. 00:24:12.904 [2024-04-27 00:58:05.438991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.904 [2024-04-27 00:58:05.439128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.904 [2024-04-27 00:58:05.439145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.439152] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.439159] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.904 [2024-04-27 00:58:05.439175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.904 qpair failed and we were unable to recover it. 00:24:12.904 [2024-04-27 00:58:05.449005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.904 [2024-04-27 00:58:05.449171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.904 [2024-04-27 00:58:05.449189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.449196] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.449202] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.904 [2024-04-27 00:58:05.449222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.904 qpair failed and we were unable to recover it. 
00:24:12.904 [2024-04-27 00:58:05.459087] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.904 [2024-04-27 00:58:05.459215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.904 [2024-04-27 00:58:05.459233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.459240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.459247] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.904 [2024-04-27 00:58:05.459264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.904 qpair failed and we were unable to recover it. 00:24:12.904 [2024-04-27 00:58:05.469093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.904 [2024-04-27 00:58:05.469221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.904 [2024-04-27 00:58:05.469239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.469247] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.469253] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.904 [2024-04-27 00:58:05.469270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.904 qpair failed and we were unable to recover it. 00:24:12.904 [2024-04-27 00:58:05.479114] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.904 [2024-04-27 00:58:05.479238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.904 [2024-04-27 00:58:05.479255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.479263] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.479270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.904 [2024-04-27 00:58:05.479287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.904 qpair failed and we were unable to recover it. 
00:24:12.904 [2024-04-27 00:58:05.489123] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.904 [2024-04-27 00:58:05.489248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.904 [2024-04-27 00:58:05.489266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.904 [2024-04-27 00:58:05.489274] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.904 [2024-04-27 00:58:05.489280] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.905 [2024-04-27 00:58:05.489296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.905 qpair failed and we were unable to recover it. 00:24:12.905 [2024-04-27 00:58:05.499171] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.905 [2024-04-27 00:58:05.499309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.905 [2024-04-27 00:58:05.499330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.905 [2024-04-27 00:58:05.499337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.905 [2024-04-27 00:58:05.499343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.905 [2024-04-27 00:58:05.499360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.905 qpair failed and we were unable to recover it. 00:24:12.905 [2024-04-27 00:58:05.509137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.905 [2024-04-27 00:58:05.509274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.905 [2024-04-27 00:58:05.509291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.905 [2024-04-27 00:58:05.509299] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.905 [2024-04-27 00:58:05.509306] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.905 [2024-04-27 00:58:05.509322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.905 qpair failed and we were unable to recover it. 
00:24:12.905 [2024-04-27 00:58:05.519242] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.905 [2024-04-27 00:58:05.519370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.905 [2024-04-27 00:58:05.519388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.905 [2024-04-27 00:58:05.519396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.905 [2024-04-27 00:58:05.519402] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.905 [2024-04-27 00:58:05.519418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.905 qpair failed and we were unable to recover it. 00:24:12.905 [2024-04-27 00:58:05.529246] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.905 [2024-04-27 00:58:05.529375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.905 [2024-04-27 00:58:05.529393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.905 [2024-04-27 00:58:05.529400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.905 [2024-04-27 00:58:05.529406] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.905 [2024-04-27 00:58:05.529422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.905 qpair failed and we were unable to recover it. 00:24:12.905 [2024-04-27 00:58:05.539269] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.905 [2024-04-27 00:58:05.539398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.905 [2024-04-27 00:58:05.539415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.905 [2024-04-27 00:58:05.539423] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.905 [2024-04-27 00:58:05.539429] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.905 [2024-04-27 00:58:05.539449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.905 qpair failed and we were unable to recover it. 
00:24:12.905 [2024-04-27 00:58:05.549258] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.905 [2024-04-27 00:58:05.549384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.905 [2024-04-27 00:58:05.549402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.905 [2024-04-27 00:58:05.549410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.905 [2024-04-27 00:58:05.549416] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.905 [2024-04-27 00:58:05.549432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.905 qpair failed and we were unable to recover it. 00:24:12.905 [2024-04-27 00:58:05.559287] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.905 [2024-04-27 00:58:05.559410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.905 [2024-04-27 00:58:05.559427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.905 [2024-04-27 00:58:05.559434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.905 [2024-04-27 00:58:05.559440] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.905 [2024-04-27 00:58:05.559457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.905 qpair failed and we were unable to recover it. 00:24:12.905 [2024-04-27 00:58:05.569313] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.905 [2024-04-27 00:58:05.569439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.905 [2024-04-27 00:58:05.569456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.905 [2024-04-27 00:58:05.569464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.905 [2024-04-27 00:58:05.569470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.905 [2024-04-27 00:58:05.569486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.905 qpair failed and we were unable to recover it. 
00:24:12.905 [2024-04-27 00:58:05.579380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.905 [2024-04-27 00:58:05.579509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.905 [2024-04-27 00:58:05.579526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.905 [2024-04-27 00:58:05.579534] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.905 [2024-04-27 00:58:05.579539] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.905 [2024-04-27 00:58:05.579556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.905 qpair failed and we were unable to recover it. 00:24:12.905 [2024-04-27 00:58:05.589532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.905 [2024-04-27 00:58:05.589660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.905 [2024-04-27 00:58:05.589683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.905 [2024-04-27 00:58:05.589690] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.905 [2024-04-27 00:58:05.589696] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:12.905 [2024-04-27 00:58:05.589713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:12.905 qpair failed and we were unable to recover it. 00:24:13.178 [2024-04-27 00:58:05.599398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.178 [2024-04-27 00:58:05.599531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.178 [2024-04-27 00:58:05.599553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.178 [2024-04-27 00:58:05.599561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.178 [2024-04-27 00:58:05.599568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.178 [2024-04-27 00:58:05.599586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.178 qpair failed and we were unable to recover it. 
00:24:13.178 [2024-04-27 00:58:05.609532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.178 [2024-04-27 00:58:05.609662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.178 [2024-04-27 00:58:05.609684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.178 [2024-04-27 00:58:05.609692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.178 [2024-04-27 00:58:05.609698] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.178 [2024-04-27 00:58:05.609715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.178 qpair failed and we were unable to recover it. 00:24:13.178 [2024-04-27 00:58:05.619532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.178 [2024-04-27 00:58:05.619671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.178 [2024-04-27 00:58:05.619690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.178 [2024-04-27 00:58:05.619697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.178 [2024-04-27 00:58:05.619704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.178 [2024-04-27 00:58:05.619720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.178 qpair failed and we were unable to recover it. 00:24:13.178 [2024-04-27 00:58:05.629552] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.178 [2024-04-27 00:58:05.629696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.178 [2024-04-27 00:58:05.629714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.178 [2024-04-27 00:58:05.629722] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.178 [2024-04-27 00:58:05.629728] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.178 [2024-04-27 00:58:05.629747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.178 qpair failed and we were unable to recover it. 
00:24:13.178 [2024-04-27 00:58:05.639556] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.178 [2024-04-27 00:58:05.639689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.178 [2024-04-27 00:58:05.639707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.178 [2024-04-27 00:58:05.639715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.178 [2024-04-27 00:58:05.639722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.178 [2024-04-27 00:58:05.639739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.178 qpair failed and we were unable to recover it. 00:24:13.178 [2024-04-27 00:58:05.649545] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.178 [2024-04-27 00:58:05.649667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.178 [2024-04-27 00:58:05.649687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.178 [2024-04-27 00:58:05.649694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.178 [2024-04-27 00:58:05.649701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.178 [2024-04-27 00:58:05.649717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.178 qpair failed and we were unable to recover it. 00:24:13.178 [2024-04-27 00:58:05.659638] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.178 [2024-04-27 00:58:05.659785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.178 [2024-04-27 00:58:05.659802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.178 [2024-04-27 00:58:05.659809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.178 [2024-04-27 00:58:05.659816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.178 [2024-04-27 00:58:05.659832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.178 qpair failed and we were unable to recover it. 
00:24:13.178 [2024-04-27 00:58:05.669606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.178 [2024-04-27 00:58:05.669744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.178 [2024-04-27 00:58:05.669762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.178 [2024-04-27 00:58:05.669770] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.178 [2024-04-27 00:58:05.669776] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.178 [2024-04-27 00:58:05.669792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.178 qpair failed and we were unable to recover it. 00:24:13.178 [2024-04-27 00:58:05.679696] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.178 [2024-04-27 00:58:05.679827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.178 [2024-04-27 00:58:05.679849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.178 [2024-04-27 00:58:05.679856] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.178 [2024-04-27 00:58:05.679863] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.178 [2024-04-27 00:58:05.679879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.178 qpair failed and we were unable to recover it. 00:24:13.178 [2024-04-27 00:58:05.689713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.178 [2024-04-27 00:58:05.689843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.178 [2024-04-27 00:58:05.689861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.178 [2024-04-27 00:58:05.689868] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.178 [2024-04-27 00:58:05.689874] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.178 [2024-04-27 00:58:05.689890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.178 qpair failed and we were unable to recover it. 
00:24:13.178 [2024-04-27 00:58:05.699716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.178 [2024-04-27 00:58:05.699881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.178 [2024-04-27 00:58:05.699898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.178 [2024-04-27 00:58:05.699906] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.178 [2024-04-27 00:58:05.699912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.699929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 00:24:13.179 [2024-04-27 00:58:05.709764] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.709888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.179 [2024-04-27 00:58:05.709905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.179 [2024-04-27 00:58:05.709913] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.179 [2024-04-27 00:58:05.709919] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.709934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 00:24:13.179 [2024-04-27 00:58:05.719786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.719916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.179 [2024-04-27 00:58:05.719934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.179 [2024-04-27 00:58:05.719941] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.179 [2024-04-27 00:58:05.719950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.719967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 
00:24:13.179 [2024-04-27 00:58:05.729853] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.729992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.179 [2024-04-27 00:58:05.730010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.179 [2024-04-27 00:58:05.730017] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.179 [2024-04-27 00:58:05.730023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.730039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 00:24:13.179 [2024-04-27 00:58:05.739857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.739986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.179 [2024-04-27 00:58:05.740003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.179 [2024-04-27 00:58:05.740011] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.179 [2024-04-27 00:58:05.740017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.740033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 00:24:13.179 [2024-04-27 00:58:05.749866] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.750000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.179 [2024-04-27 00:58:05.750017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.179 [2024-04-27 00:58:05.750024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.179 [2024-04-27 00:58:05.750031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.750047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 
00:24:13.179 [2024-04-27 00:58:05.759899] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.760027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.179 [2024-04-27 00:58:05.760044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.179 [2024-04-27 00:58:05.760051] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.179 [2024-04-27 00:58:05.760058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.760082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 00:24:13.179 [2024-04-27 00:58:05.769941] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.770075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.179 [2024-04-27 00:58:05.770093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.179 [2024-04-27 00:58:05.770101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.179 [2024-04-27 00:58:05.770107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.770123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 00:24:13.179 [2024-04-27 00:58:05.779982] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.780117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.179 [2024-04-27 00:58:05.780134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.179 [2024-04-27 00:58:05.780142] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.179 [2024-04-27 00:58:05.780149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.780165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 
00:24:13.179 [2024-04-27 00:58:05.790094] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.790229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.179 [2024-04-27 00:58:05.790246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.179 [2024-04-27 00:58:05.790253] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.179 [2024-04-27 00:58:05.790259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.790276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 00:24:13.179 [2024-04-27 00:58:05.800085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.800219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.179 [2024-04-27 00:58:05.800236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.179 [2024-04-27 00:58:05.800243] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.179 [2024-04-27 00:58:05.800250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.800267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 00:24:13.179 [2024-04-27 00:58:05.810157] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.810318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.179 [2024-04-27 00:58:05.810335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.179 [2024-04-27 00:58:05.810342] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.179 [2024-04-27 00:58:05.810352] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.810368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 
00:24:13.179 [2024-04-27 00:58:05.820138] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.820274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.179 [2024-04-27 00:58:05.820293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.179 [2024-04-27 00:58:05.820301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.179 [2024-04-27 00:58:05.820307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.820324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 00:24:13.179 [2024-04-27 00:58:05.830125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.830255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.179 [2024-04-27 00:58:05.830273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.179 [2024-04-27 00:58:05.830280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.179 [2024-04-27 00:58:05.830286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.179 [2024-04-27 00:58:05.830303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.179 qpair failed and we were unable to recover it. 00:24:13.179 [2024-04-27 00:58:05.840167] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.179 [2024-04-27 00:58:05.840296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.180 [2024-04-27 00:58:05.840313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.180 [2024-04-27 00:58:05.840320] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.180 [2024-04-27 00:58:05.840327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.180 [2024-04-27 00:58:05.840343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.180 qpair failed and we were unable to recover it. 
00:24:13.180 [2024-04-27 00:58:05.850189] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.180 [2024-04-27 00:58:05.850315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.180 [2024-04-27 00:58:05.850332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.180 [2024-04-27 00:58:05.850339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.180 [2024-04-27 00:58:05.850345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.180 [2024-04-27 00:58:05.850361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.180 qpair failed and we were unable to recover it. 00:24:13.180 [2024-04-27 00:58:05.860207] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.180 [2024-04-27 00:58:05.860386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.180 [2024-04-27 00:58:05.860410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.180 [2024-04-27 00:58:05.860420] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.180 [2024-04-27 00:58:05.860427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.180 [2024-04-27 00:58:05.860446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.180 qpair failed and we were unable to recover it. 00:24:13.451 [2024-04-27 00:58:05.870257] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.451 [2024-04-27 00:58:05.870388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.451 [2024-04-27 00:58:05.870409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.451 [2024-04-27 00:58:05.870417] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.451 [2024-04-27 00:58:05.870423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.451 [2024-04-27 00:58:05.870440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.451 qpair failed and we were unable to recover it. 
00:24:13.451 [2024-04-27 00:58:05.880281] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.451 [2024-04-27 00:58:05.880409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.451 [2024-04-27 00:58:05.880429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.451 [2024-04-27 00:58:05.880437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.451 [2024-04-27 00:58:05.880443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.451 [2024-04-27 00:58:05.880461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.451 qpair failed and we were unable to recover it. 00:24:13.451 [2024-04-27 00:58:05.890303] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.451 [2024-04-27 00:58:05.890430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.451 [2024-04-27 00:58:05.890449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.451 [2024-04-27 00:58:05.890457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.451 [2024-04-27 00:58:05.890463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.451 [2024-04-27 00:58:05.890480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.451 qpair failed and we were unable to recover it. 00:24:13.451 [2024-04-27 00:58:05.900334] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.451 [2024-04-27 00:58:05.900463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.451 [2024-04-27 00:58:05.900482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.451 [2024-04-27 00:58:05.900489] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.451 [2024-04-27 00:58:05.900498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.451 [2024-04-27 00:58:05.900515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.451 qpair failed and we were unable to recover it. 
00:24:13.451 [2024-04-27 00:58:05.910388] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.451 [2024-04-27 00:58:05.910515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.451 [2024-04-27 00:58:05.910533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.451 [2024-04-27 00:58:05.910540] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.451 [2024-04-27 00:58:05.910547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.451 [2024-04-27 00:58:05.910563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.451 qpair failed and we were unable to recover it. 00:24:13.451 [2024-04-27 00:58:05.920389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.451 [2024-04-27 00:58:05.920520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.451 [2024-04-27 00:58:05.920538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.451 [2024-04-27 00:58:05.920545] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:05.920552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.452 [2024-04-27 00:58:05.920568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.452 qpair failed and we were unable to recover it. 00:24:13.452 [2024-04-27 00:58:05.930413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.452 [2024-04-27 00:58:05.930536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.452 [2024-04-27 00:58:05.930554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.452 [2024-04-27 00:58:05.930561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:05.930567] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.452 [2024-04-27 00:58:05.930583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.452 qpair failed and we were unable to recover it. 
00:24:13.452 [2024-04-27 00:58:05.940452] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.452 [2024-04-27 00:58:05.940604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.452 [2024-04-27 00:58:05.940621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.452 [2024-04-27 00:58:05.940629] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:05.940636] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.452 [2024-04-27 00:58:05.940652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.452 qpair failed and we were unable to recover it. 00:24:13.452 [2024-04-27 00:58:05.950480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.452 [2024-04-27 00:58:05.950611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.452 [2024-04-27 00:58:05.950630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.452 [2024-04-27 00:58:05.950638] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:05.950646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.452 [2024-04-27 00:58:05.950663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.452 qpair failed and we were unable to recover it. 00:24:13.452 [2024-04-27 00:58:05.960499] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.452 [2024-04-27 00:58:05.960625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.452 [2024-04-27 00:58:05.960643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.452 [2024-04-27 00:58:05.960651] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:05.960657] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.452 [2024-04-27 00:58:05.960673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.452 qpair failed and we were unable to recover it. 
00:24:13.452 [2024-04-27 00:58:05.970549] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.452 [2024-04-27 00:58:05.970676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.452 [2024-04-27 00:58:05.970693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.452 [2024-04-27 00:58:05.970702] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:05.970708] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.452 [2024-04-27 00:58:05.970724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.452 qpair failed and we were unable to recover it. 00:24:13.452 [2024-04-27 00:58:05.980576] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.452 [2024-04-27 00:58:05.980709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.452 [2024-04-27 00:58:05.980726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.452 [2024-04-27 00:58:05.980734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:05.980740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.452 [2024-04-27 00:58:05.980756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.452 qpair failed and we were unable to recover it. 00:24:13.452 [2024-04-27 00:58:05.990590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.452 [2024-04-27 00:58:05.990714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.452 [2024-04-27 00:58:05.990732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.452 [2024-04-27 00:58:05.990742] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:05.990749] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.452 [2024-04-27 00:58:05.990765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.452 qpair failed and we were unable to recover it. 
00:24:13.452 [2024-04-27 00:58:06.000760] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.452 [2024-04-27 00:58:06.000890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.452 [2024-04-27 00:58:06.000907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.452 [2024-04-27 00:58:06.000915] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:06.000921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.452 [2024-04-27 00:58:06.000938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.452 qpair failed and we were unable to recover it. 00:24:13.452 [2024-04-27 00:58:06.010697] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.452 [2024-04-27 00:58:06.010859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.452 [2024-04-27 00:58:06.010876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.452 [2024-04-27 00:58:06.010883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:06.010890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.452 [2024-04-27 00:58:06.010906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.452 qpair failed and we were unable to recover it. 00:24:13.452 [2024-04-27 00:58:06.020679] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.452 [2024-04-27 00:58:06.020808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.452 [2024-04-27 00:58:06.020825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.452 [2024-04-27 00:58:06.020833] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:06.020839] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.452 [2024-04-27 00:58:06.020855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.452 qpair failed and we were unable to recover it. 
00:24:13.452 [2024-04-27 00:58:06.030695] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.452 [2024-04-27 00:58:06.030857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.452 [2024-04-27 00:58:06.030875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.452 [2024-04-27 00:58:06.030882] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:06.030888] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:13.452 [2024-04-27 00:58:06.030904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.452 qpair failed and we were unable to recover it. 00:24:13.452 [2024-04-27 00:58:06.031014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7377e0 is same with the state(5) to be set 00:24:13.452 [2024-04-27 00:58:06.040727] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.452 [2024-04-27 00:58:06.040857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.452 [2024-04-27 00:58:06.040879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.452 [2024-04-27 00:58:06.040887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:06.040894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.452 [2024-04-27 00:58:06.040912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.452 qpair failed and we were unable to recover it. 00:24:13.452 [2024-04-27 00:58:06.050702] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.452 [2024-04-27 00:58:06.050829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.452 [2024-04-27 00:58:06.050846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.452 [2024-04-27 00:58:06.050854] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.452 [2024-04-27 00:58:06.050860] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.452 [2024-04-27 00:58:06.050876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.453 qpair failed and we were unable to recover it. 
00:24:13.453 [2024-04-27 00:58:06.060770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.453 [2024-04-27 00:58:06.060901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.453 [2024-04-27 00:58:06.060920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.453 [2024-04-27 00:58:06.060928] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.453 [2024-04-27 00:58:06.060934] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.453 [2024-04-27 00:58:06.060951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.453 qpair failed and we were unable to recover it. 00:24:13.453 [2024-04-27 00:58:06.070807] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.453 [2024-04-27 00:58:06.070938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.453 [2024-04-27 00:58:06.070955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.453 [2024-04-27 00:58:06.070962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.453 [2024-04-27 00:58:06.070969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.453 [2024-04-27 00:58:06.070986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.453 qpair failed and we were unable to recover it. 00:24:13.453 [2024-04-27 00:58:06.080849] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.453 [2024-04-27 00:58:06.080981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.453 [2024-04-27 00:58:06.081001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.453 [2024-04-27 00:58:06.081008] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.453 [2024-04-27 00:58:06.081014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.453 [2024-04-27 00:58:06.081030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.453 qpair failed and we were unable to recover it. 
00:24:13.453 [2024-04-27 00:58:06.090868] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.453 [2024-04-27 00:58:06.091000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.453 [2024-04-27 00:58:06.091016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.453 [2024-04-27 00:58:06.091024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.453 [2024-04-27 00:58:06.091030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.453 [2024-04-27 00:58:06.091046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.453 qpair failed and we were unable to recover it. 00:24:13.453 [2024-04-27 00:58:06.100886] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.453 [2024-04-27 00:58:06.101013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.453 [2024-04-27 00:58:06.101030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.453 [2024-04-27 00:58:06.101038] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.453 [2024-04-27 00:58:06.101044] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.453 [2024-04-27 00:58:06.101061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.453 qpair failed and we were unable to recover it. 00:24:13.453 [2024-04-27 00:58:06.110926] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.453 [2024-04-27 00:58:06.111053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.453 [2024-04-27 00:58:06.111069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.453 [2024-04-27 00:58:06.111081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.453 [2024-04-27 00:58:06.111087] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.453 [2024-04-27 00:58:06.111104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.453 qpair failed and we were unable to recover it. 
00:24:13.453 [2024-04-27 00:58:06.120916] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.453 [2024-04-27 00:58:06.121040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.453 [2024-04-27 00:58:06.121058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.453 [2024-04-27 00:58:06.121065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.453 [2024-04-27 00:58:06.121076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.453 [2024-04-27 00:58:06.121097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.453 qpair failed and we were unable to recover it. 00:24:13.453 [2024-04-27 00:58:06.131020] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.453 [2024-04-27 00:58:06.131168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.453 [2024-04-27 00:58:06.131185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.453 [2024-04-27 00:58:06.131192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.453 [2024-04-27 00:58:06.131198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.453 [2024-04-27 00:58:06.131215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.453 qpair failed and we were unable to recover it. 00:24:13.453 [2024-04-27 00:58:06.140945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.453 [2024-04-27 00:58:06.141077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.453 [2024-04-27 00:58:06.141093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.453 [2024-04-27 00:58:06.141101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.453 [2024-04-27 00:58:06.141107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.453 [2024-04-27 00:58:06.141124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.453 qpair failed and we were unable to recover it. 
00:24:13.714 [2024-04-27 00:58:06.150970] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.714 [2024-04-27 00:58:06.151101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.714 [2024-04-27 00:58:06.151117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.714 [2024-04-27 00:58:06.151125] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.714 [2024-04-27 00:58:06.151131] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.714 [2024-04-27 00:58:06.151147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.714 qpair failed and we were unable to recover it. 00:24:13.714 [2024-04-27 00:58:06.161079] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.714 [2024-04-27 00:58:06.161209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.714 [2024-04-27 00:58:06.161225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.714 [2024-04-27 00:58:06.161233] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.714 [2024-04-27 00:58:06.161239] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.714 [2024-04-27 00:58:06.161255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.714 qpair failed and we were unable to recover it. 00:24:13.714 [2024-04-27 00:58:06.171107] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.714 [2024-04-27 00:58:06.171237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.714 [2024-04-27 00:58:06.171256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.714 [2024-04-27 00:58:06.171264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.714 [2024-04-27 00:58:06.171270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.714 [2024-04-27 00:58:06.171287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.714 qpair failed and we were unable to recover it. 
00:24:13.714 [2024-04-27 00:58:06.181112] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.714 [2024-04-27 00:58:06.181242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.714 [2024-04-27 00:58:06.181258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.714 [2024-04-27 00:58:06.181265] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.714 [2024-04-27 00:58:06.181271] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.714 [2024-04-27 00:58:06.181288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.714 qpair failed and we were unable to recover it. 00:24:13.714 [2024-04-27 00:58:06.191182] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.714 [2024-04-27 00:58:06.191309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.714 [2024-04-27 00:58:06.191326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.714 [2024-04-27 00:58:06.191333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.714 [2024-04-27 00:58:06.191339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.714 [2024-04-27 00:58:06.191356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.714 qpair failed and we were unable to recover it. 00:24:13.714 [2024-04-27 00:58:06.201107] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.714 [2024-04-27 00:58:06.201228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.714 [2024-04-27 00:58:06.201245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.714 [2024-04-27 00:58:06.201252] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.714 [2024-04-27 00:58:06.201258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.714 [2024-04-27 00:58:06.201274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.714 qpair failed and we were unable to recover it. 
00:24:13.714 [2024-04-27 00:58:06.211214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.714 [2024-04-27 00:58:06.211339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.714 [2024-04-27 00:58:06.211356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.714 [2024-04-27 00:58:06.211363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.714 [2024-04-27 00:58:06.211374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.714 [2024-04-27 00:58:06.211390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.714 qpair failed and we were unable to recover it. 00:24:13.714 [2024-04-27 00:58:06.221189] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.714 [2024-04-27 00:58:06.221314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.714 [2024-04-27 00:58:06.221330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.714 [2024-04-27 00:58:06.221338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.714 [2024-04-27 00:58:06.221344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.714 [2024-04-27 00:58:06.221360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.714 qpair failed and we were unable to recover it. 00:24:13.714 [2024-04-27 00:58:06.231274] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.714 [2024-04-27 00:58:06.231403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.714 [2024-04-27 00:58:06.231419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.714 [2024-04-27 00:58:06.231426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.714 [2024-04-27 00:58:06.231433] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.714 [2024-04-27 00:58:06.231449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.714 qpair failed and we were unable to recover it. 
00:24:13.714 [2024-04-27 00:58:06.241293] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.714 [2024-04-27 00:58:06.241419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.714 [2024-04-27 00:58:06.241435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.714 [2024-04-27 00:58:06.241443] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.714 [2024-04-27 00:58:06.241449] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.714 [2024-04-27 00:58:06.241464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.714 qpair failed and we were unable to recover it. 00:24:13.714 [2024-04-27 00:58:06.251313] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.714 [2024-04-27 00:58:06.251439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.714 [2024-04-27 00:58:06.251455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.714 [2024-04-27 00:58:06.251463] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.714 [2024-04-27 00:58:06.251469] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.714 [2024-04-27 00:58:06.251486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.714 qpair failed and we were unable to recover it. 00:24:13.714 [2024-04-27 00:58:06.261294] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.714 [2024-04-27 00:58:06.261424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.714 [2024-04-27 00:58:06.261441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.714 [2024-04-27 00:58:06.261448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.714 [2024-04-27 00:58:06.261454] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.714 [2024-04-27 00:58:06.261471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.714 qpair failed and we were unable to recover it. 
00:24:13.714 [2024-04-27 00:58:06.271386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.714 [2024-04-27 00:58:06.271519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.714 [2024-04-27 00:58:06.271536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.715 [2024-04-27 00:58:06.271543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.715 [2024-04-27 00:58:06.271550] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.715 [2024-04-27 00:58:06.271567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.715 qpair failed and we were unable to recover it. 00:24:13.715 [2024-04-27 00:58:06.281410] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.715 [2024-04-27 00:58:06.281540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.715 [2024-04-27 00:58:06.281557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.715 [2024-04-27 00:58:06.281564] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.715 [2024-04-27 00:58:06.281571] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.715 [2024-04-27 00:58:06.281587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.715 qpair failed and we were unable to recover it. 00:24:13.715 [2024-04-27 00:58:06.291456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.715 [2024-04-27 00:58:06.291584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.715 [2024-04-27 00:58:06.291601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.715 [2024-04-27 00:58:06.291609] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.715 [2024-04-27 00:58:06.291615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.715 [2024-04-27 00:58:06.291631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.715 qpair failed and we were unable to recover it. 
00:24:13.715 [2024-04-27 00:58:06.301444] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.715 [2024-04-27 00:58:06.301570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.715 [2024-04-27 00:58:06.301586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.715 [2024-04-27 00:58:06.301596] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.715 [2024-04-27 00:58:06.301602] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.715 [2024-04-27 00:58:06.301619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.715 qpair failed and we were unable to recover it. 00:24:13.715 [2024-04-27 00:58:06.311403] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.715 [2024-04-27 00:58:06.311694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.715 [2024-04-27 00:58:06.311712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.715 [2024-04-27 00:58:06.311719] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.715 [2024-04-27 00:58:06.311726] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.715 [2024-04-27 00:58:06.311743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.715 qpair failed and we were unable to recover it. 00:24:13.715 [2024-04-27 00:58:06.321525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.715 [2024-04-27 00:58:06.321655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.715 [2024-04-27 00:58:06.321671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.715 [2024-04-27 00:58:06.321678] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.715 [2024-04-27 00:58:06.321685] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.715 [2024-04-27 00:58:06.321702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.715 qpair failed and we were unable to recover it. 
00:24:13.715 [2024-04-27 00:58:06.331552] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.715 [2024-04-27 00:58:06.331676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.715 [2024-04-27 00:58:06.331694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.715 [2024-04-27 00:58:06.331701] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.715 [2024-04-27 00:58:06.331708] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.715 [2024-04-27 00:58:06.331725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.715 qpair failed and we were unable to recover it. 00:24:13.715 [2024-04-27 00:58:06.341509] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.715 [2024-04-27 00:58:06.341635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.715 [2024-04-27 00:58:06.341653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.715 [2024-04-27 00:58:06.341660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.715 [2024-04-27 00:58:06.341668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.715 [2024-04-27 00:58:06.341685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.715 qpair failed and we were unable to recover it. 00:24:13.715 [2024-04-27 00:58:06.351613] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.715 [2024-04-27 00:58:06.351739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.715 [2024-04-27 00:58:06.351755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.715 [2024-04-27 00:58:06.351762] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.715 [2024-04-27 00:58:06.351768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.715 [2024-04-27 00:58:06.351785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.715 qpair failed and we were unable to recover it. 
00:24:13.715 [2024-04-27 00:58:06.361593] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.715 [2024-04-27 00:58:06.361720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.715 [2024-04-27 00:58:06.361736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.715 [2024-04-27 00:58:06.361744] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.715 [2024-04-27 00:58:06.361750] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.715 [2024-04-27 00:58:06.361766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.715 qpair failed and we were unable to recover it. 00:24:13.715 [2024-04-27 00:58:06.371600] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.715 [2024-04-27 00:58:06.371738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.715 [2024-04-27 00:58:06.371754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.715 [2024-04-27 00:58:06.371761] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.715 [2024-04-27 00:58:06.371768] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.715 [2024-04-27 00:58:06.371784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.715 qpair failed and we were unable to recover it. 00:24:13.715 [2024-04-27 00:58:06.381672] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.715 [2024-04-27 00:58:06.381801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.715 [2024-04-27 00:58:06.381819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.715 [2024-04-27 00:58:06.381827] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.715 [2024-04-27 00:58:06.381834] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.715 [2024-04-27 00:58:06.381850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.715 qpair failed and we were unable to recover it. 
00:24:13.715 [2024-04-27 00:58:06.391724] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.715 [2024-04-27 00:58:06.391859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.715 [2024-04-27 00:58:06.391876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.715 [2024-04-27 00:58:06.391887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.715 [2024-04-27 00:58:06.391893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.716 [2024-04-27 00:58:06.391909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.716 qpair failed and we were unable to recover it. 00:24:13.716 [2024-04-27 00:58:06.401753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.716 [2024-04-27 00:58:06.401880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.716 [2024-04-27 00:58:06.401897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.716 [2024-04-27 00:58:06.401905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.716 [2024-04-27 00:58:06.401911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.716 [2024-04-27 00:58:06.401927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.716 qpair failed and we were unable to recover it. 00:24:13.975 [2024-04-27 00:58:06.411778] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.975 [2024-04-27 00:58:06.411906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.975 [2024-04-27 00:58:06.411922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.975 [2024-04-27 00:58:06.411929] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.975 [2024-04-27 00:58:06.411935] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.975 [2024-04-27 00:58:06.411952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.975 qpair failed and we were unable to recover it. 
00:24:13.975 [2024-04-27 00:58:06.421776] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.975 [2024-04-27 00:58:06.421915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.975 [2024-04-27 00:58:06.421931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.975 [2024-04-27 00:58:06.421939] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.975 [2024-04-27 00:58:06.421945] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.975 [2024-04-27 00:58:06.421961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.975 qpair failed and we were unable to recover it. 00:24:13.975 [2024-04-27 00:58:06.431848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.431976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.431993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.432000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.432007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.432023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 00:24:13.976 [2024-04-27 00:58:06.441860] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.441980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.441996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.442004] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.442010] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.442026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 
00:24:13.976 [2024-04-27 00:58:06.451809] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.451931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.451947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.451955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.451961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.451977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 00:24:13.976 [2024-04-27 00:58:06.461893] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.462050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.462066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.462080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.462086] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.462103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 00:24:13.976 [2024-04-27 00:58:06.471949] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.472082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.472099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.472106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.472113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.472129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 
00:24:13.976 [2024-04-27 00:58:06.481975] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.482109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.482129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.482136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.482142] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.482158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 00:24:13.976 [2024-04-27 00:58:06.492000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.492133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.492150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.492157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.492164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.492180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 00:24:13.976 [2024-04-27 00:58:06.502201] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.502328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.502344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.502351] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.502357] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.502374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 
00:24:13.976 [2024-04-27 00:58:06.512050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.512178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.512195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.512202] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.512208] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.512224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 00:24:13.976 [2024-04-27 00:58:06.522075] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.522200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.522217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.522224] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.522230] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.522252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 00:24:13.976 [2024-04-27 00:58:06.532099] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.532238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.532255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.532262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.532268] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.532284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 
00:24:13.976 [2024-04-27 00:58:06.542139] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.542263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.542279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.542286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.542291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.542307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 00:24:13.976 [2024-04-27 00:58:06.552181] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.552308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.552324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.552331] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.552337] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.552354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 00:24:13.976 [2024-04-27 00:58:06.562203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.562330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.562346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.562353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.562360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.562376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 
00:24:13.976 [2024-04-27 00:58:06.572161] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.572296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.572315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.572322] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.572328] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.572345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 00:24:13.976 [2024-04-27 00:58:06.582250] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.582415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.582432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.582439] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.582445] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.582461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 00:24:13.976 [2024-04-27 00:58:06.592289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.592419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.592436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.592443] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.592449] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.592465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 
00:24:13.976 [2024-04-27 00:58:06.602363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.602528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.602544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.602553] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.602560] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.976 [2024-04-27 00:58:06.602576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.976 qpair failed and we were unable to recover it. 00:24:13.976 [2024-04-27 00:58:06.612350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.976 [2024-04-27 00:58:06.612478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.976 [2024-04-27 00:58:06.612494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.976 [2024-04-27 00:58:06.612501] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.976 [2024-04-27 00:58:06.612511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.977 [2024-04-27 00:58:06.612528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.977 qpair failed and we were unable to recover it. 00:24:13.977 [2024-04-27 00:58:06.622307] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.977 [2024-04-27 00:58:06.622435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.977 [2024-04-27 00:58:06.622451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.977 [2024-04-27 00:58:06.622459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.977 [2024-04-27 00:58:06.622465] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.977 [2024-04-27 00:58:06.622481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.977 qpair failed and we were unable to recover it. 
00:24:13.977 [2024-04-27 00:58:06.632319] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.977 [2024-04-27 00:58:06.632459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.977 [2024-04-27 00:58:06.632475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.977 [2024-04-27 00:58:06.632482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.977 [2024-04-27 00:58:06.632489] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.977 [2024-04-27 00:58:06.632505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.977 qpair failed and we were unable to recover it. 00:24:13.977 [2024-04-27 00:58:06.642359] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.977 [2024-04-27 00:58:06.642486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.977 [2024-04-27 00:58:06.642505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.977 [2024-04-27 00:58:06.642512] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.977 [2024-04-27 00:58:06.642519] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.977 [2024-04-27 00:58:06.642535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.977 qpair failed and we were unable to recover it. 00:24:13.977 [2024-04-27 00:58:06.652469] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.977 [2024-04-27 00:58:06.652636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.977 [2024-04-27 00:58:06.652653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.977 [2024-04-27 00:58:06.652660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.977 [2024-04-27 00:58:06.652666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.977 [2024-04-27 00:58:06.652683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.977 qpair failed and we were unable to recover it. 
00:24:13.977 [2024-04-27 00:58:06.662466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:13.977 [2024-04-27 00:58:06.662595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:13.977 [2024-04-27 00:58:06.662612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:13.977 [2024-04-27 00:58:06.662619] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:13.977 [2024-04-27 00:58:06.662625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:13.977 [2024-04-27 00:58:06.662641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.977 qpair failed and we were unable to recover it. 00:24:14.237 [2024-04-27 00:58:06.672517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.237 [2024-04-27 00:58:06.672641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.237 [2024-04-27 00:58:06.672658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.237 [2024-04-27 00:58:06.672665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.237 [2024-04-27 00:58:06.672671] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.237 [2024-04-27 00:58:06.672687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.237 qpair failed and we were unable to recover it. 00:24:14.237 [2024-04-27 00:58:06.682547] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.237 [2024-04-27 00:58:06.682673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.237 [2024-04-27 00:58:06.682689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.237 [2024-04-27 00:58:06.682696] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.237 [2024-04-27 00:58:06.682703] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.237 [2024-04-27 00:58:06.682720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.237 qpair failed and we were unable to recover it. 
00:24:14.237 [2024-04-27 00:58:06.692546] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.237 [2024-04-27 00:58:06.692680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.237 [2024-04-27 00:58:06.692697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.237 [2024-04-27 00:58:06.692704] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.237 [2024-04-27 00:58:06.692710] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.237 [2024-04-27 00:58:06.692726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.237 qpair failed and we were unable to recover it. 00:24:14.237 [2024-04-27 00:58:06.702583] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.237 [2024-04-27 00:58:06.702707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.237 [2024-04-27 00:58:06.702723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.237 [2024-04-27 00:58:06.702731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.237 [2024-04-27 00:58:06.702740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.702756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 00:24:14.238 [2024-04-27 00:58:06.712640] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.238 [2024-04-27 00:58:06.712774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.238 [2024-04-27 00:58:06.712790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.238 [2024-04-27 00:58:06.712798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.238 [2024-04-27 00:58:06.712804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.712821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 
00:24:14.238 [2024-04-27 00:58:06.722681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.238 [2024-04-27 00:58:06.722817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.238 [2024-04-27 00:58:06.722834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.238 [2024-04-27 00:58:06.722841] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.238 [2024-04-27 00:58:06.722847] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.722863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 00:24:14.238 [2024-04-27 00:58:06.732666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.238 [2024-04-27 00:58:06.732798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.238 [2024-04-27 00:58:06.732814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.238 [2024-04-27 00:58:06.732821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.238 [2024-04-27 00:58:06.732827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.732843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 00:24:14.238 [2024-04-27 00:58:06.742640] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.238 [2024-04-27 00:58:06.742767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.238 [2024-04-27 00:58:06.742784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.238 [2024-04-27 00:58:06.742791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.238 [2024-04-27 00:58:06.742797] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.742813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 
00:24:14.238 [2024-04-27 00:58:06.752718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.238 [2024-04-27 00:58:06.752843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.238 [2024-04-27 00:58:06.752860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.238 [2024-04-27 00:58:06.752867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.238 [2024-04-27 00:58:06.752873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.752890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 00:24:14.238 [2024-04-27 00:58:06.762770] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.238 [2024-04-27 00:58:06.762897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.238 [2024-04-27 00:58:06.762914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.238 [2024-04-27 00:58:06.762921] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.238 [2024-04-27 00:58:06.762927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.762943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 00:24:14.238 [2024-04-27 00:58:06.772729] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.238 [2024-04-27 00:58:06.772850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.238 [2024-04-27 00:58:06.772866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.238 [2024-04-27 00:58:06.772874] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.238 [2024-04-27 00:58:06.772881] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.772897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 
00:24:14.238 [2024-04-27 00:58:06.782816] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.238 [2024-04-27 00:58:06.782942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.238 [2024-04-27 00:58:06.782959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.238 [2024-04-27 00:58:06.782966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.238 [2024-04-27 00:58:06.782972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.782988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 00:24:14.238 [2024-04-27 00:58:06.792860] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.238 [2024-04-27 00:58:06.792987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.238 [2024-04-27 00:58:06.793003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.238 [2024-04-27 00:58:06.793014] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.238 [2024-04-27 00:58:06.793021] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.793037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 00:24:14.238 [2024-04-27 00:58:06.802898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.238 [2024-04-27 00:58:06.803023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.238 [2024-04-27 00:58:06.803039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.238 [2024-04-27 00:58:06.803047] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.238 [2024-04-27 00:58:06.803053] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.803075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 
00:24:14.238 [2024-04-27 00:58:06.812915] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.238 [2024-04-27 00:58:06.813040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.238 [2024-04-27 00:58:06.813057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.238 [2024-04-27 00:58:06.813064] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.238 [2024-04-27 00:58:06.813075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.813092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 00:24:14.238 [2024-04-27 00:58:06.822880] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.238 [2024-04-27 00:58:06.823003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.238 [2024-04-27 00:58:06.823020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.238 [2024-04-27 00:58:06.823027] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.238 [2024-04-27 00:58:06.823033] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.823049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 00:24:14.238 [2024-04-27 00:58:06.832923] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.238 [2024-04-27 00:58:06.833052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.238 [2024-04-27 00:58:06.833068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.238 [2024-04-27 00:58:06.833081] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.238 [2024-04-27 00:58:06.833087] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.238 [2024-04-27 00:58:06.833104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.238 qpair failed and we were unable to recover it. 
00:24:14.238 [2024-04-27 00:58:06.842933] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.239 [2024-04-27 00:58:06.843054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.239 [2024-04-27 00:58:06.843077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.239 [2024-04-27 00:58:06.843085] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.239 [2024-04-27 00:58:06.843092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.239 [2024-04-27 00:58:06.843109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.239 qpair failed and we were unable to recover it. 00:24:14.239 [2024-04-27 00:58:06.853044] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.239 [2024-04-27 00:58:06.853169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.239 [2024-04-27 00:58:06.853185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.239 [2024-04-27 00:58:06.853193] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.239 [2024-04-27 00:58:06.853199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.239 [2024-04-27 00:58:06.853215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.239 qpair failed and we were unable to recover it. 00:24:14.239 [2024-04-27 00:58:06.863050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.239 [2024-04-27 00:58:06.863184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.239 [2024-04-27 00:58:06.863201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.239 [2024-04-27 00:58:06.863208] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.239 [2024-04-27 00:58:06.863215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.239 [2024-04-27 00:58:06.863231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.239 qpair failed and we were unable to recover it. 
00:24:14.239 [2024-04-27 00:58:06.873183] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.239 [2024-04-27 00:58:06.873307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.239 [2024-04-27 00:58:06.873323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.239 [2024-04-27 00:58:06.873330] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.239 [2024-04-27 00:58:06.873336] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.239 [2024-04-27 00:58:06.873353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.239 qpair failed and we were unable to recover it. 00:24:14.239 [2024-04-27 00:58:06.883096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.239 [2024-04-27 00:58:06.883224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.239 [2024-04-27 00:58:06.883244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.239 [2024-04-27 00:58:06.883251] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.239 [2024-04-27 00:58:06.883257] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.239 [2024-04-27 00:58:06.883274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.239 qpair failed and we were unable to recover it. 00:24:14.239 [2024-04-27 00:58:06.893084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.239 [2024-04-27 00:58:06.893211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.239 [2024-04-27 00:58:06.893228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.239 [2024-04-27 00:58:06.893235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.239 [2024-04-27 00:58:06.893242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.239 [2024-04-27 00:58:06.893259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.239 qpair failed and we were unable to recover it. 
00:24:14.239 [2024-04-27 00:58:06.903151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.239 [2024-04-27 00:58:06.903278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.239 [2024-04-27 00:58:06.903294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.239 [2024-04-27 00:58:06.903302] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.239 [2024-04-27 00:58:06.903308] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.239 [2024-04-27 00:58:06.903325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.239 qpair failed and we were unable to recover it. 00:24:14.239 [2024-04-27 00:58:06.913173] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.239 [2024-04-27 00:58:06.913299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.239 [2024-04-27 00:58:06.913315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.239 [2024-04-27 00:58:06.913323] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.239 [2024-04-27 00:58:06.913329] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.239 [2024-04-27 00:58:06.913346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.239 qpair failed and we were unable to recover it. 00:24:14.239 [2024-04-27 00:58:06.923158] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.239 [2024-04-27 00:58:06.923458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.239 [2024-04-27 00:58:06.923475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.239 [2024-04-27 00:58:06.923482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.239 [2024-04-27 00:58:06.923488] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.239 [2024-04-27 00:58:06.923508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.239 qpair failed and we were unable to recover it. 
00:24:14.499 [2024-04-27 00:58:06.933238] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.499 [2024-04-27 00:58:06.933362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.499 [2024-04-27 00:58:06.933379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.499 [2024-04-27 00:58:06.933386] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.499 [2024-04-27 00:58:06.933392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.499 [2024-04-27 00:58:06.933408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.499 qpair failed and we were unable to recover it. 00:24:14.499 [2024-04-27 00:58:06.943206] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.499 [2024-04-27 00:58:06.943341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.499 [2024-04-27 00:58:06.943359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.499 [2024-04-27 00:58:06.943366] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.499 [2024-04-27 00:58:06.943373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.499 [2024-04-27 00:58:06.943390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.499 qpair failed and we were unable to recover it. 00:24:14.499 [2024-04-27 00:58:06.953297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.499 [2024-04-27 00:58:06.953424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.499 [2024-04-27 00:58:06.953441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.499 [2024-04-27 00:58:06.953448] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.499 [2024-04-27 00:58:06.953454] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.499 [2024-04-27 00:58:06.953470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.499 qpair failed and we were unable to recover it. 
00:24:14.499 [2024-04-27 00:58:06.963256] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.499 [2024-04-27 00:58:06.963385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.499 [2024-04-27 00:58:06.963402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.499 [2024-04-27 00:58:06.963409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.499 [2024-04-27 00:58:06.963415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.499 [2024-04-27 00:58:06.963431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.499 qpair failed and we were unable to recover it. 00:24:14.499 [2024-04-27 00:58:06.973328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.499 [2024-04-27 00:58:06.973471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.499 [2024-04-27 00:58:06.973491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.499 [2024-04-27 00:58:06.973499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.499 [2024-04-27 00:58:06.973505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.499 [2024-04-27 00:58:06.973523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.499 qpair failed and we were unable to recover it. 00:24:14.499 [2024-04-27 00:58:06.983374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.499 [2024-04-27 00:58:06.983502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.499 [2024-04-27 00:58:06.983519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:06.983526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:06.983532] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.500 [2024-04-27 00:58:06.983547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.500 qpair failed and we were unable to recover it. 
00:24:14.500 [2024-04-27 00:58:06.993425] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.500 [2024-04-27 00:58:06.993550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.500 [2024-04-27 00:58:06.993566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:06.993573] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:06.993579] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.500 [2024-04-27 00:58:06.993595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.500 qpair failed and we were unable to recover it. 00:24:14.500 [2024-04-27 00:58:07.003437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.500 [2024-04-27 00:58:07.003563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.500 [2024-04-27 00:58:07.003579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:07.003586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:07.003592] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.500 [2024-04-27 00:58:07.003608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.500 qpair failed and we were unable to recover it. 00:24:14.500 [2024-04-27 00:58:07.013409] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.500 [2024-04-27 00:58:07.013542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.500 [2024-04-27 00:58:07.013558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:07.013565] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:07.013574] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.500 [2024-04-27 00:58:07.013591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.500 qpair failed and we were unable to recover it. 
00:24:14.500 [2024-04-27 00:58:07.023428] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.500 [2024-04-27 00:58:07.023593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.500 [2024-04-27 00:58:07.023610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:07.023618] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:07.023624] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.500 [2024-04-27 00:58:07.023641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.500 qpair failed and we were unable to recover it. 00:24:14.500 [2024-04-27 00:58:07.033459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.500 [2024-04-27 00:58:07.033583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.500 [2024-04-27 00:58:07.033600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:07.033607] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:07.033613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.500 [2024-04-27 00:58:07.033631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.500 qpair failed and we were unable to recover it. 00:24:14.500 [2024-04-27 00:58:07.043503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.500 [2024-04-27 00:58:07.043638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.500 [2024-04-27 00:58:07.043654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:07.043661] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:07.043667] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.500 [2024-04-27 00:58:07.043683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.500 qpair failed and we were unable to recover it. 
00:24:14.500 [2024-04-27 00:58:07.053507] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.500 [2024-04-27 00:58:07.053631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.500 [2024-04-27 00:58:07.053648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:07.053655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:07.053661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.500 [2024-04-27 00:58:07.053677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.500 qpair failed and we were unable to recover it. 00:24:14.500 [2024-04-27 00:58:07.063604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.500 [2024-04-27 00:58:07.063733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.500 [2024-04-27 00:58:07.063750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:07.063757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:07.063763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.500 [2024-04-27 00:58:07.063779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.500 qpair failed and we were unable to recover it. 00:24:14.500 [2024-04-27 00:58:07.073633] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.500 [2024-04-27 00:58:07.073774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.500 [2024-04-27 00:58:07.073791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:07.073798] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:07.073804] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.500 [2024-04-27 00:58:07.073820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.500 qpair failed and we were unable to recover it. 
00:24:14.500 [2024-04-27 00:58:07.083638] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.500 [2024-04-27 00:58:07.083765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.500 [2024-04-27 00:58:07.083781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:07.083788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:07.083794] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.500 [2024-04-27 00:58:07.083810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.500 qpair failed and we were unable to recover it. 00:24:14.500 [2024-04-27 00:58:07.093670] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.500 [2024-04-27 00:58:07.093795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.500 [2024-04-27 00:58:07.093811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:07.093818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:07.093824] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.500 [2024-04-27 00:58:07.093840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.500 qpair failed and we were unable to recover it. 00:24:14.500 [2024-04-27 00:58:07.103835] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.500 [2024-04-27 00:58:07.103964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.500 [2024-04-27 00:58:07.103981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:07.103988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:07.103997] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.500 [2024-04-27 00:58:07.104014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.500 qpair failed and we were unable to recover it. 
00:24:14.500 [2024-04-27 00:58:07.113684] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.500 [2024-04-27 00:58:07.113811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.500 [2024-04-27 00:58:07.113827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.500 [2024-04-27 00:58:07.113835] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.500 [2024-04-27 00:58:07.113842] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.501 [2024-04-27 00:58:07.113858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.501 qpair failed and we were unable to recover it. 00:24:14.501 [2024-04-27 00:58:07.123716] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.501 [2024-04-27 00:58:07.123884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.501 [2024-04-27 00:58:07.123900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.501 [2024-04-27 00:58:07.123908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.501 [2024-04-27 00:58:07.123914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.501 [2024-04-27 00:58:07.123932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.501 qpair failed and we were unable to recover it. 00:24:14.501 [2024-04-27 00:58:07.133804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.501 [2024-04-27 00:58:07.133930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.501 [2024-04-27 00:58:07.133947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.501 [2024-04-27 00:58:07.133954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.501 [2024-04-27 00:58:07.133961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.501 [2024-04-27 00:58:07.133977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.501 qpair failed and we were unable to recover it. 
00:24:14.501 [2024-04-27 00:58:07.143760] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.501 [2024-04-27 00:58:07.143890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.501 [2024-04-27 00:58:07.143906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.501 [2024-04-27 00:58:07.143914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.501 [2024-04-27 00:58:07.143920] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.501 [2024-04-27 00:58:07.143936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.501 qpair failed and we were unable to recover it. 00:24:14.501 [2024-04-27 00:58:07.153901] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.501 [2024-04-27 00:58:07.154024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.501 [2024-04-27 00:58:07.154040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.501 [2024-04-27 00:58:07.154048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.501 [2024-04-27 00:58:07.154054] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.501 [2024-04-27 00:58:07.154077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.501 qpair failed and we were unable to recover it. 00:24:14.501 [2024-04-27 00:58:07.163903] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.501 [2024-04-27 00:58:07.164048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.501 [2024-04-27 00:58:07.164064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.501 [2024-04-27 00:58:07.164079] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.501 [2024-04-27 00:58:07.164085] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.501 [2024-04-27 00:58:07.164102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.501 qpair failed and we were unable to recover it. 
00:24:14.501 [2024-04-27 00:58:07.173838] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.501 [2024-04-27 00:58:07.173963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.501 [2024-04-27 00:58:07.173979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.501 [2024-04-27 00:58:07.173987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.501 [2024-04-27 00:58:07.173993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.501 [2024-04-27 00:58:07.174010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.501 qpair failed and we were unable to recover it. 00:24:14.501 [2024-04-27 00:58:07.183923] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.501 [2024-04-27 00:58:07.184053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.501 [2024-04-27 00:58:07.184074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.501 [2024-04-27 00:58:07.184082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.501 [2024-04-27 00:58:07.184088] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.501 [2024-04-27 00:58:07.184105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.501 qpair failed and we were unable to recover it. 00:24:14.761 [2024-04-27 00:58:07.193995] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.761 [2024-04-27 00:58:07.194129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.761 [2024-04-27 00:58:07.194146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.761 [2024-04-27 00:58:07.194156] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.761 [2024-04-27 00:58:07.194171] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.761 [2024-04-27 00:58:07.194189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.761 qpair failed and we were unable to recover it. 
00:24:14.761 [2024-04-27 00:58:07.204010] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.761 [2024-04-27 00:58:07.204137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.761 [2024-04-27 00:58:07.204154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.761 [2024-04-27 00:58:07.204161] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.761 [2024-04-27 00:58:07.204167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.761 [2024-04-27 00:58:07.204184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.761 qpair failed and we were unable to recover it. 00:24:14.761 [2024-04-27 00:58:07.214028] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.761 [2024-04-27 00:58:07.214329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.761 [2024-04-27 00:58:07.214347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.761 [2024-04-27 00:58:07.214353] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.761 [2024-04-27 00:58:07.214360] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.761 [2024-04-27 00:58:07.214377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.761 qpair failed and we were unable to recover it. 00:24:14.761 [2024-04-27 00:58:07.224043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.761 [2024-04-27 00:58:07.224185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.761 [2024-04-27 00:58:07.224202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.761 [2024-04-27 00:58:07.224209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.761 [2024-04-27 00:58:07.224215] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.761 [2024-04-27 00:58:07.224232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.761 qpair failed and we were unable to recover it. 
00:24:14.761 [2024-04-27 00:58:07.234005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.761 [2024-04-27 00:58:07.234153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.761 [2024-04-27 00:58:07.234170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.761 [2024-04-27 00:58:07.234177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.761 [2024-04-27 00:58:07.234183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.761 [2024-04-27 00:58:07.234200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.761 qpair failed and we were unable to recover it. 00:24:14.761 [2024-04-27 00:58:07.244126] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.244256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.244273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.244281] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.762 [2024-04-27 00:58:07.244287] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.762 [2024-04-27 00:58:07.244303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.762 qpair failed and we were unable to recover it. 00:24:14.762 [2024-04-27 00:58:07.254147] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.254278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.254296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.254304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.762 [2024-04-27 00:58:07.254310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.762 [2024-04-27 00:58:07.254327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.762 qpair failed and we were unable to recover it. 
00:24:14.762 [2024-04-27 00:58:07.264217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.264343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.264360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.264367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.762 [2024-04-27 00:58:07.264373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.762 [2024-04-27 00:58:07.264389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.762 qpair failed and we were unable to recover it. 00:24:14.762 [2024-04-27 00:58:07.274230] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.274389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.274405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.274413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.762 [2024-04-27 00:58:07.274419] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.762 [2024-04-27 00:58:07.274435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.762 qpair failed and we were unable to recover it. 00:24:14.762 [2024-04-27 00:58:07.284247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.284376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.284397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.284405] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.762 [2024-04-27 00:58:07.284411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.762 [2024-04-27 00:58:07.284429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.762 qpair failed and we were unable to recover it. 
00:24:14.762 [2024-04-27 00:58:07.294300] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.294434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.294450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.294458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.762 [2024-04-27 00:58:07.294464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.762 [2024-04-27 00:58:07.294481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.762 qpair failed and we were unable to recover it. 00:24:14.762 [2024-04-27 00:58:07.304280] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.304421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.304437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.304444] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.762 [2024-04-27 00:58:07.304450] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.762 [2024-04-27 00:58:07.304466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.762 qpair failed and we were unable to recover it. 00:24:14.762 [2024-04-27 00:58:07.314327] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.314453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.314469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.314476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.762 [2024-04-27 00:58:07.314482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.762 [2024-04-27 00:58:07.314498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.762 qpair failed and we were unable to recover it. 
00:24:14.762 [2024-04-27 00:58:07.324368] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.324502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.324518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.324525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.762 [2024-04-27 00:58:07.324532] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.762 [2024-04-27 00:58:07.324552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.762 qpair failed and we were unable to recover it. 00:24:14.762 [2024-04-27 00:58:07.334398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.334520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.334537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.334544] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.762 [2024-04-27 00:58:07.334550] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.762 [2024-04-27 00:58:07.334566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.762 qpair failed and we were unable to recover it. 00:24:14.762 [2024-04-27 00:58:07.344389] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.344516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.344534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.344541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.762 [2024-04-27 00:58:07.344547] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.762 [2024-04-27 00:58:07.344563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.762 qpair failed and we were unable to recover it. 
00:24:14.762 [2024-04-27 00:58:07.354409] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.354538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.354555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.354562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.762 [2024-04-27 00:58:07.354569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.762 [2024-04-27 00:58:07.354584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.762 qpair failed and we were unable to recover it. 00:24:14.762 [2024-04-27 00:58:07.364466] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.364587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.364604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.364612] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.762 [2024-04-27 00:58:07.364618] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.762 [2024-04-27 00:58:07.364636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.762 qpair failed and we were unable to recover it. 00:24:14.762 [2024-04-27 00:58:07.374417] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.762 [2024-04-27 00:58:07.374553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.762 [2024-04-27 00:58:07.374573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.762 [2024-04-27 00:58:07.374580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.763 [2024-04-27 00:58:07.374586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.763 [2024-04-27 00:58:07.374603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.763 qpair failed and we were unable to recover it. 
00:24:14.763 [2024-04-27 00:58:07.384501] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.763 [2024-04-27 00:58:07.384624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.763 [2024-04-27 00:58:07.384640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.763 [2024-04-27 00:58:07.384647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.763 [2024-04-27 00:58:07.384653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.763 [2024-04-27 00:58:07.384669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.763 qpair failed and we were unable to recover it. 00:24:14.763 [2024-04-27 00:58:07.394555] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.763 [2024-04-27 00:58:07.394683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.763 [2024-04-27 00:58:07.394700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.763 [2024-04-27 00:58:07.394707] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.763 [2024-04-27 00:58:07.394713] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.763 [2024-04-27 00:58:07.394729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.763 qpair failed and we were unable to recover it. 00:24:14.763 [2024-04-27 00:58:07.404587] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.763 [2024-04-27 00:58:07.404715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.763 [2024-04-27 00:58:07.404732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.763 [2024-04-27 00:58:07.404739] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.763 [2024-04-27 00:58:07.404745] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.763 [2024-04-27 00:58:07.404762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.763 qpair failed and we were unable to recover it. 
00:24:14.763 [2024-04-27 00:58:07.414538] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.763 [2024-04-27 00:58:07.414662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.763 [2024-04-27 00:58:07.414678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.763 [2024-04-27 00:58:07.414685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.763 [2024-04-27 00:58:07.414691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.763 [2024-04-27 00:58:07.414711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.763 qpair failed and we were unable to recover it. 00:24:14.763 [2024-04-27 00:58:07.424554] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.763 [2024-04-27 00:58:07.424682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.763 [2024-04-27 00:58:07.424698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.763 [2024-04-27 00:58:07.424706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.763 [2024-04-27 00:58:07.424712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.763 [2024-04-27 00:58:07.424729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.763 qpair failed and we were unable to recover it. 00:24:14.763 [2024-04-27 00:58:07.434663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.763 [2024-04-27 00:58:07.434791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.763 [2024-04-27 00:58:07.434807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.763 [2024-04-27 00:58:07.434814] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.763 [2024-04-27 00:58:07.434821] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.763 [2024-04-27 00:58:07.434837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.763 qpair failed and we were unable to recover it. 
00:24:14.763 [2024-04-27 00:58:07.444677] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.763 [2024-04-27 00:58:07.444802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.763 [2024-04-27 00:58:07.444818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.763 [2024-04-27 00:58:07.444826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.763 [2024-04-27 00:58:07.444832] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.763 [2024-04-27 00:58:07.444848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.763 qpair failed and we were unable to recover it. 00:24:14.763 [2024-04-27 00:58:07.454751] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:14.763 [2024-04-27 00:58:07.454889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:14.763 [2024-04-27 00:58:07.454905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:14.763 [2024-04-27 00:58:07.454913] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:14.763 [2024-04-27 00:58:07.454919] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:14.763 [2024-04-27 00:58:07.454935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:14.763 qpair failed and we were unable to recover it. 00:24:15.044 [2024-04-27 00:58:07.464732] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.044 [2024-04-27 00:58:07.464886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.044 [2024-04-27 00:58:07.464902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.044 [2024-04-27 00:58:07.464910] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.044 [2024-04-27 00:58:07.464916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.044 [2024-04-27 00:58:07.464932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.044 qpair failed and we were unable to recover it. 
00:24:15.044 [2024-04-27 00:58:07.474750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.044 [2024-04-27 00:58:07.474882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.044 [2024-04-27 00:58:07.474898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.044 [2024-04-27 00:58:07.474905] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.044 [2024-04-27 00:58:07.474911] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.044 [2024-04-27 00:58:07.474927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.044 qpair failed and we were unable to recover it. 00:24:15.044 [2024-04-27 00:58:07.484736] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.044 [2024-04-27 00:58:07.484864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.044 [2024-04-27 00:58:07.484880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.044 [2024-04-27 00:58:07.484887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.044 [2024-04-27 00:58:07.484893] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.044 [2024-04-27 00:58:07.484910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.044 qpair failed and we were unable to recover it. 00:24:15.044 [2024-04-27 00:58:07.494836] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.044 [2024-04-27 00:58:07.494963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.044 [2024-04-27 00:58:07.494979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.044 [2024-04-27 00:58:07.494987] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.044 [2024-04-27 00:58:07.494993] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.044 [2024-04-27 00:58:07.495009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.044 qpair failed and we were unable to recover it. 
00:24:15.044 [2024-04-27 00:58:07.504851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.044 [2024-04-27 00:58:07.504977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.044 [2024-04-27 00:58:07.504993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.044 [2024-04-27 00:58:07.505001] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.044 [2024-04-27 00:58:07.505010] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.044 [2024-04-27 00:58:07.505026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.044 qpair failed and we were unable to recover it. 00:24:15.044 [2024-04-27 00:58:07.514935] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.044 [2024-04-27 00:58:07.515058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.044 [2024-04-27 00:58:07.515080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.044 [2024-04-27 00:58:07.515088] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.044 [2024-04-27 00:58:07.515095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.044 [2024-04-27 00:58:07.515111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.044 qpair failed and we were unable to recover it. 00:24:15.044 [2024-04-27 00:58:07.524930] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.044 [2024-04-27 00:58:07.525061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.044 [2024-04-27 00:58:07.525082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.044 [2024-04-27 00:58:07.525089] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.044 [2024-04-27 00:58:07.525095] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.044 [2024-04-27 00:58:07.525112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.044 qpair failed and we were unable to recover it. 
00:24:15.044 [2024-04-27 00:58:07.534956] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.044 [2024-04-27 00:58:07.535085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.044 [2024-04-27 00:58:07.535101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.044 [2024-04-27 00:58:07.535108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.044 [2024-04-27 00:58:07.535115] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.044 [2024-04-27 00:58:07.535131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.044 qpair failed and we were unable to recover it. 00:24:15.044 [2024-04-27 00:58:07.544964] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.044 [2024-04-27 00:58:07.545095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.044 [2024-04-27 00:58:07.545112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.044 [2024-04-27 00:58:07.545119] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.044 [2024-04-27 00:58:07.545126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.044 [2024-04-27 00:58:07.545142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.044 qpair failed and we were unable to recover it. 00:24:15.044 [2024-04-27 00:58:07.555010] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.044 [2024-04-27 00:58:07.555144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.044 [2024-04-27 00:58:07.555161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.044 [2024-04-27 00:58:07.555168] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.044 [2024-04-27 00:58:07.555174] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.044 [2024-04-27 00:58:07.555190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.044 qpair failed and we were unable to recover it. 
00:24:15.044 [2024-04-27 00:58:07.565043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.044 [2024-04-27 00:58:07.565174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.044 [2024-04-27 00:58:07.565190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.045 [2024-04-27 00:58:07.565197] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.045 [2024-04-27 00:58:07.565203] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.045 [2024-04-27 00:58:07.565220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.045 qpair failed and we were unable to recover it. 00:24:15.045 [2024-04-27 00:58:07.575075] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.045 [2024-04-27 00:58:07.575201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.045 [2024-04-27 00:58:07.575218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.045 [2024-04-27 00:58:07.575225] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.045 [2024-04-27 00:58:07.575231] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.045 [2024-04-27 00:58:07.575247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.045 qpair failed and we were unable to recover it. 00:24:15.045 [2024-04-27 00:58:07.585088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.045 [2024-04-27 00:58:07.585216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.045 [2024-04-27 00:58:07.585232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.045 [2024-04-27 00:58:07.585239] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.045 [2024-04-27 00:58:07.585245] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.045 [2024-04-27 00:58:07.585261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.045 qpair failed and we were unable to recover it. 
00:24:15.045 [2024-04-27 00:58:07.595113] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.045 [2024-04-27 00:58:07.595246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.045 [2024-04-27 00:58:07.595262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.045 [2024-04-27 00:58:07.595273] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.045 [2024-04-27 00:58:07.595279] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.045 [2024-04-27 00:58:07.595296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.045 qpair failed and we were unable to recover it. 00:24:15.045 [2024-04-27 00:58:07.605182] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.045 [2024-04-27 00:58:07.605308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.045 [2024-04-27 00:58:07.605324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.045 [2024-04-27 00:58:07.605331] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.045 [2024-04-27 00:58:07.605338] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.045 [2024-04-27 00:58:07.605354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.045 qpair failed and we were unable to recover it. 00:24:15.045 [2024-04-27 00:58:07.615234] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.045 [2024-04-27 00:58:07.615380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.045 [2024-04-27 00:58:07.615396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.045 [2024-04-27 00:58:07.615404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.045 [2024-04-27 00:58:07.615410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.045 [2024-04-27 00:58:07.615425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.045 qpair failed and we were unable to recover it. 
00:24:15.045 [2024-04-27 00:58:07.625189] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.045 [2024-04-27 00:58:07.625315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.045 [2024-04-27 00:58:07.625331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.045 [2024-04-27 00:58:07.625338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.045 [2024-04-27 00:58:07.625344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.045 [2024-04-27 00:58:07.625360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.045 qpair failed and we were unable to recover it. 00:24:15.045 [2024-04-27 00:58:07.635247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.045 [2024-04-27 00:58:07.635378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.045 [2024-04-27 00:58:07.635394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.045 [2024-04-27 00:58:07.635401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.045 [2024-04-27 00:58:07.635407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.045 [2024-04-27 00:58:07.635424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.045 qpair failed and we were unable to recover it. 00:24:15.045 [2024-04-27 00:58:07.645215] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.045 [2024-04-27 00:58:07.645342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.045 [2024-04-27 00:58:07.645359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.045 [2024-04-27 00:58:07.645367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.045 [2024-04-27 00:58:07.645374] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.045 [2024-04-27 00:58:07.645390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.045 qpair failed and we were unable to recover it. 
00:24:15.045 [2024-04-27 00:58:07.655236] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.045 [2024-04-27 00:58:07.655370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.045 [2024-04-27 00:58:07.655385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.045 [2024-04-27 00:58:07.655393] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.045 [2024-04-27 00:58:07.655399] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.045 [2024-04-27 00:58:07.655416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.045 qpair failed and we were unable to recover it. 00:24:15.045 [2024-04-27 00:58:07.665311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.045 [2024-04-27 00:58:07.665434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.045 [2024-04-27 00:58:07.665450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.045 [2024-04-27 00:58:07.665458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.045 [2024-04-27 00:58:07.665464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.045 [2024-04-27 00:58:07.665480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.045 qpair failed and we were unable to recover it. 00:24:15.045 [2024-04-27 00:58:07.675349] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.045 [2024-04-27 00:58:07.675476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.045 [2024-04-27 00:58:07.675493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.045 [2024-04-27 00:58:07.675500] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.045 [2024-04-27 00:58:07.675507] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.045 [2024-04-27 00:58:07.675524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.045 qpair failed and we were unable to recover it. 
00:24:15.045 [2024-04-27 00:58:07.685380] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.045 [2024-04-27 00:58:07.685509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.045 [2024-04-27 00:58:07.685525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.045 [2024-04-27 00:58:07.685535] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.045 [2024-04-27 00:58:07.685541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.045 [2024-04-27 00:58:07.685557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.045 qpair failed and we were unable to recover it. 00:24:15.046 [2024-04-27 00:58:07.695411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.046 [2024-04-27 00:58:07.695539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.046 [2024-04-27 00:58:07.695555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.046 [2024-04-27 00:58:07.695562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.046 [2024-04-27 00:58:07.695569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.046 [2024-04-27 00:58:07.695585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.046 qpair failed and we were unable to recover it. 00:24:15.046 [2024-04-27 00:58:07.705413] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.046 [2024-04-27 00:58:07.705539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.046 [2024-04-27 00:58:07.705556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.046 [2024-04-27 00:58:07.705563] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.046 [2024-04-27 00:58:07.705570] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.046 [2024-04-27 00:58:07.705586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.046 qpair failed and we were unable to recover it. 
00:24:15.046 [2024-04-27 00:58:07.715479] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.046 [2024-04-27 00:58:07.715613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.046 [2024-04-27 00:58:07.715629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.046 [2024-04-27 00:58:07.715636] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.046 [2024-04-27 00:58:07.715643] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.046 [2024-04-27 00:58:07.715659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.046 qpair failed and we were unable to recover it. 00:24:15.046 [2024-04-27 00:58:07.725502] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.046 [2024-04-27 00:58:07.725630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.046 [2024-04-27 00:58:07.725648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.046 [2024-04-27 00:58:07.725655] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.046 [2024-04-27 00:58:07.725661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.046 [2024-04-27 00:58:07.725678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.046 qpair failed and we were unable to recover it. 00:24:15.046 [2024-04-27 00:58:07.735534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.046 [2024-04-27 00:58:07.735659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.046 [2024-04-27 00:58:07.735676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.046 [2024-04-27 00:58:07.735683] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.046 [2024-04-27 00:58:07.735689] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.046 [2024-04-27 00:58:07.735705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.046 qpair failed and we were unable to recover it. 
00:24:15.306 [2024-04-27 00:58:07.745536] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.306 [2024-04-27 00:58:07.745664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.307 [2024-04-27 00:58:07.745681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.307 [2024-04-27 00:58:07.745688] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.307 [2024-04-27 00:58:07.745694] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.307 [2024-04-27 00:58:07.745710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.307 qpair failed and we were unable to recover it. 00:24:15.307 [2024-04-27 00:58:07.755504] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.307 [2024-04-27 00:58:07.755634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.307 [2024-04-27 00:58:07.755651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.307 [2024-04-27 00:58:07.755658] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.307 [2024-04-27 00:58:07.755664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.307 [2024-04-27 00:58:07.755680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.307 qpair failed and we were unable to recover it. 00:24:15.307 [2024-04-27 00:58:07.765617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.307 [2024-04-27 00:58:07.765741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.307 [2024-04-27 00:58:07.765757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.307 [2024-04-27 00:58:07.765764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.307 [2024-04-27 00:58:07.765770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.307 [2024-04-27 00:58:07.765786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.307 qpair failed and we were unable to recover it. 
00:24:15.307 [2024-04-27 00:58:07.775637] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.307 [2024-04-27 00:58:07.775762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.307 [2024-04-27 00:58:07.775781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.307 [2024-04-27 00:58:07.775789] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.307 [2024-04-27 00:58:07.775795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.307 [2024-04-27 00:58:07.775811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.307 qpair failed and we were unable to recover it. 00:24:15.307 [2024-04-27 00:58:07.785643] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.307 [2024-04-27 00:58:07.785772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.307 [2024-04-27 00:58:07.785788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.307 [2024-04-27 00:58:07.785795] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.307 [2024-04-27 00:58:07.785801] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.307 [2024-04-27 00:58:07.785817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.307 qpair failed and we were unable to recover it. 00:24:15.307 [2024-04-27 00:58:07.795690] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.307 [2024-04-27 00:58:07.795822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.307 [2024-04-27 00:58:07.795838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.307 [2024-04-27 00:58:07.795845] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.307 [2024-04-27 00:58:07.795851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.307 [2024-04-27 00:58:07.795867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.307 qpair failed and we were unable to recover it. 
00:24:15.307 [2024-04-27 00:58:07.805774] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.307 [2024-04-27 00:58:07.805901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.307 [2024-04-27 00:58:07.805918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.307 [2024-04-27 00:58:07.805925] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.307 [2024-04-27 00:58:07.805932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.307 [2024-04-27 00:58:07.805948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.307 qpair failed and we were unable to recover it. 00:24:15.307 [2024-04-27 00:58:07.815758] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.307 [2024-04-27 00:58:07.815876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.307 [2024-04-27 00:58:07.815892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.307 [2024-04-27 00:58:07.815900] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.307 [2024-04-27 00:58:07.815906] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.307 [2024-04-27 00:58:07.815924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.307 qpair failed and we were unable to recover it. 00:24:15.307 [2024-04-27 00:58:07.825812] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.307 [2024-04-27 00:58:07.825940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.307 [2024-04-27 00:58:07.825957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.307 [2024-04-27 00:58:07.825964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.307 [2024-04-27 00:58:07.825970] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.307 [2024-04-27 00:58:07.825986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.307 qpair failed and we were unable to recover it. 
00:24:15.307 [2024-04-27 00:58:07.835812] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.307 [2024-04-27 00:58:07.835942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.307 [2024-04-27 00:58:07.835959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.307 [2024-04-27 00:58:07.835966] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.307 [2024-04-27 00:58:07.835973] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.307 [2024-04-27 00:58:07.835989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.307 qpair failed and we were unable to recover it. 00:24:15.307 [2024-04-27 00:58:07.845782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.307 [2024-04-27 00:58:07.845912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.307 [2024-04-27 00:58:07.845929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.307 [2024-04-27 00:58:07.845936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.307 [2024-04-27 00:58:07.845942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.307 [2024-04-27 00:58:07.845958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.307 qpair failed and we were unable to recover it. 00:24:15.307 [2024-04-27 00:58:07.855898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.307 [2024-04-27 00:58:07.856032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.307 [2024-04-27 00:58:07.856048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.307 [2024-04-27 00:58:07.856055] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.307 [2024-04-27 00:58:07.856061] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.307 [2024-04-27 00:58:07.856085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.307 qpair failed and we were unable to recover it. 
00:24:15.307 [2024-04-27 00:58:07.865877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.307 [2024-04-27 00:58:07.866002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.307 [2024-04-27 00:58:07.866022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.307 [2024-04-27 00:58:07.866029] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.307 [2024-04-27 00:58:07.866036] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.307 [2024-04-27 00:58:07.866051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.307 qpair failed and we were unable to recover it. 00:24:15.307 [2024-04-27 00:58:07.875932] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.308 [2024-04-27 00:58:07.876068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.308 [2024-04-27 00:58:07.876092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.308 [2024-04-27 00:58:07.876099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.308 [2024-04-27 00:58:07.876106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.308 [2024-04-27 00:58:07.876124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.308 qpair failed and we were unable to recover it. 00:24:15.308 [2024-04-27 00:58:07.885959] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.308 [2024-04-27 00:58:07.886090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.308 [2024-04-27 00:58:07.886107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.308 [2024-04-27 00:58:07.886115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.308 [2024-04-27 00:58:07.886121] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.308 [2024-04-27 00:58:07.886137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.308 qpair failed and we were unable to recover it. 
00:24:15.308 [2024-04-27 00:58:07.895991] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.308 [2024-04-27 00:58:07.896119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.308 [2024-04-27 00:58:07.896135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.308 [2024-04-27 00:58:07.896143] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.308 [2024-04-27 00:58:07.896149] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.308 [2024-04-27 00:58:07.896165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.308 qpair failed and we were unable to recover it. 00:24:15.308 [2024-04-27 00:58:07.905989] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.308 [2024-04-27 00:58:07.906160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.308 [2024-04-27 00:58:07.906177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.308 [2024-04-27 00:58:07.906184] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.308 [2024-04-27 00:58:07.906193] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.308 [2024-04-27 00:58:07.906210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.308 qpair failed and we were unable to recover it. 00:24:15.308 [2024-04-27 00:58:07.916228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.308 [2024-04-27 00:58:07.916361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.308 [2024-04-27 00:58:07.916378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.308 [2024-04-27 00:58:07.916385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.308 [2024-04-27 00:58:07.916392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.308 [2024-04-27 00:58:07.916408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.308 qpair failed and we were unable to recover it. 
00:24:15.308 [2024-04-27 00:58:07.926043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.308 [2024-04-27 00:58:07.926169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.308 [2024-04-27 00:58:07.926185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.308 [2024-04-27 00:58:07.926192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.308 [2024-04-27 00:58:07.926198] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.308 [2024-04-27 00:58:07.926215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.308 qpair failed and we were unable to recover it. 00:24:15.308 [2024-04-27 00:58:07.936099] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.308 [2024-04-27 00:58:07.936227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.308 [2024-04-27 00:58:07.936243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.308 [2024-04-27 00:58:07.936250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.308 [2024-04-27 00:58:07.936256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.308 [2024-04-27 00:58:07.936273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.308 qpair failed and we were unable to recover it. 00:24:15.308 [2024-04-27 00:58:07.946116] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.308 [2024-04-27 00:58:07.946244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.308 [2024-04-27 00:58:07.946260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.308 [2024-04-27 00:58:07.946267] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.308 [2024-04-27 00:58:07.946273] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.308 [2024-04-27 00:58:07.946289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.308 qpair failed and we were unable to recover it. 
00:24:15.308 [2024-04-27 00:58:07.956155] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.308 [2024-04-27 00:58:07.956287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.308 [2024-04-27 00:58:07.956303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.308 [2024-04-27 00:58:07.956310] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.308 [2024-04-27 00:58:07.956316] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.308 [2024-04-27 00:58:07.956332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.308 qpair failed and we were unable to recover it. 00:24:15.308 [2024-04-27 00:58:07.966189] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.308 [2024-04-27 00:58:07.966313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.308 [2024-04-27 00:58:07.966329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.308 [2024-04-27 00:58:07.966336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.308 [2024-04-27 00:58:07.966343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.308 [2024-04-27 00:58:07.966359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.308 qpair failed and we were unable to recover it. 00:24:15.308 [2024-04-27 00:58:07.976210] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.308 [2024-04-27 00:58:07.976336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.308 [2024-04-27 00:58:07.976353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.308 [2024-04-27 00:58:07.976360] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.308 [2024-04-27 00:58:07.976366] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.308 [2024-04-27 00:58:07.976383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.308 qpair failed and we were unable to recover it. 
00:24:15.308 [2024-04-27 00:58:07.986226] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.308 [2024-04-27 00:58:07.986351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.308 [2024-04-27 00:58:07.986368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.308 [2024-04-27 00:58:07.986375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.308 [2024-04-27 00:58:07.986381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.308 [2024-04-27 00:58:07.986397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.308 qpair failed and we were unable to recover it. 00:24:15.308 [2024-04-27 00:58:07.996300] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.308 [2024-04-27 00:58:07.996430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.308 [2024-04-27 00:58:07.996447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.308 [2024-04-27 00:58:07.996458] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.308 [2024-04-27 00:58:07.996464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.308 [2024-04-27 00:58:07.996481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.308 qpair failed and we were unable to recover it. 00:24:15.569 [2024-04-27 00:58:08.006304] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.569 [2024-04-27 00:58:08.006447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.569 [2024-04-27 00:58:08.006464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.569 [2024-04-27 00:58:08.006471] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.569 [2024-04-27 00:58:08.006477] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.569 [2024-04-27 00:58:08.006493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.569 qpair failed and we were unable to recover it. 
00:24:15.569 [2024-04-27 00:58:08.016288] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.569 [2024-04-27 00:58:08.016458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.569 [2024-04-27 00:58:08.016474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.569 [2024-04-27 00:58:08.016481] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.569 [2024-04-27 00:58:08.016488] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.569 [2024-04-27 00:58:08.016505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.569 qpair failed and we were unable to recover it. 00:24:15.569 [2024-04-27 00:58:08.026351] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.569 [2024-04-27 00:58:08.026479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.569 [2024-04-27 00:58:08.026496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.569 [2024-04-27 00:58:08.026504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.569 [2024-04-27 00:58:08.026510] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.569 [2024-04-27 00:58:08.026526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.569 qpair failed and we were unable to recover it. 00:24:15.569 [2024-04-27 00:58:08.036564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.569 [2024-04-27 00:58:08.036694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.569 [2024-04-27 00:58:08.036710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.569 [2024-04-27 00:58:08.036718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.569 [2024-04-27 00:58:08.036724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.569 [2024-04-27 00:58:08.036740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.569 qpair failed and we were unable to recover it. 
00:24:15.569 [2024-04-27 00:58:08.046448] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.569 [2024-04-27 00:58:08.046615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.569 [2024-04-27 00:58:08.046631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.569 [2024-04-27 00:58:08.046638] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.569 [2024-04-27 00:58:08.046644] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.569 [2024-04-27 00:58:08.046661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.569 qpair failed and we were unable to recover it. 00:24:15.569 [2024-04-27 00:58:08.056366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.569 [2024-04-27 00:58:08.056504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.569 [2024-04-27 00:58:08.056521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.569 [2024-04-27 00:58:08.056528] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.569 [2024-04-27 00:58:08.056535] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.569 [2024-04-27 00:58:08.056551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.569 qpair failed and we were unable to recover it. 00:24:15.569 [2024-04-27 00:58:08.066472] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.569 [2024-04-27 00:58:08.066629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.569 [2024-04-27 00:58:08.066646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.569 [2024-04-27 00:58:08.066653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.569 [2024-04-27 00:58:08.066660] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.569 [2024-04-27 00:58:08.066676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.569 qpair failed and we were unable to recover it. 
00:24:15.569 [2024-04-27 00:58:08.076500] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.569 [2024-04-27 00:58:08.076628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.569 [2024-04-27 00:58:08.076646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.569 [2024-04-27 00:58:08.076653] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.569 [2024-04-27 00:58:08.076659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.569 [2024-04-27 00:58:08.076675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.569 qpair failed and we were unable to recover it. 00:24:15.569 [2024-04-27 00:58:08.086564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.569 [2024-04-27 00:58:08.086700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.569 [2024-04-27 00:58:08.086718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.569 [2024-04-27 00:58:08.086731] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.569 [2024-04-27 00:58:08.086737] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.569 [2024-04-27 00:58:08.086753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.569 qpair failed and we were unable to recover it. 00:24:15.569 [2024-04-27 00:58:08.096562] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.569 [2024-04-27 00:58:08.096683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.569 [2024-04-27 00:58:08.096699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.569 [2024-04-27 00:58:08.096707] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.569 [2024-04-27 00:58:08.096713] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.569 [2024-04-27 00:58:08.096729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.569 qpair failed and we were unable to recover it. 
00:24:15.569 [2024-04-27 00:58:08.106571] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.569 [2024-04-27 00:58:08.106697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.569 [2024-04-27 00:58:08.106713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.569 [2024-04-27 00:58:08.106721] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.569 [2024-04-27 00:58:08.106727] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.569 [2024-04-27 00:58:08.106743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.569 qpair failed and we were unable to recover it. 00:24:15.569 [2024-04-27 00:58:08.116599] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.569 [2024-04-27 00:58:08.116728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.569 [2024-04-27 00:58:08.116745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.569 [2024-04-27 00:58:08.116752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.569 [2024-04-27 00:58:08.116758] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.569 [2024-04-27 00:58:08.116775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.569 qpair failed and we were unable to recover it. 00:24:15.569 [2024-04-27 00:58:08.126660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.569 [2024-04-27 00:58:08.126784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.569 [2024-04-27 00:58:08.126800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.569 [2024-04-27 00:58:08.126807] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.569 [2024-04-27 00:58:08.126813] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.126829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 
00:24:15.570 [2024-04-27 00:58:08.136678] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.570 [2024-04-27 00:58:08.136801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.570 [2024-04-27 00:58:08.136818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.570 [2024-04-27 00:58:08.136825] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.570 [2024-04-27 00:58:08.136831] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.136847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 00:24:15.570 [2024-04-27 00:58:08.146684] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.570 [2024-04-27 00:58:08.146811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.570 [2024-04-27 00:58:08.146827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.570 [2024-04-27 00:58:08.146834] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.570 [2024-04-27 00:58:08.146840] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.146856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 00:24:15.570 [2024-04-27 00:58:08.156664] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.570 [2024-04-27 00:58:08.156788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.570 [2024-04-27 00:58:08.156804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.570 [2024-04-27 00:58:08.156812] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.570 [2024-04-27 00:58:08.156819] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.156835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 
00:24:15.570 [2024-04-27 00:58:08.166786] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.570 [2024-04-27 00:58:08.166911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.570 [2024-04-27 00:58:08.166927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.570 [2024-04-27 00:58:08.166935] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.570 [2024-04-27 00:58:08.166941] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.166957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 00:24:15.570 [2024-04-27 00:58:08.176788] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.570 [2024-04-27 00:58:08.176917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.570 [2024-04-27 00:58:08.176937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.570 [2024-04-27 00:58:08.176944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.570 [2024-04-27 00:58:08.176950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.176966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 00:24:15.570 [2024-04-27 00:58:08.186821] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.570 [2024-04-27 00:58:08.186947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.570 [2024-04-27 00:58:08.186964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.570 [2024-04-27 00:58:08.186972] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.570 [2024-04-27 00:58:08.186978] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.186995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 
00:24:15.570 [2024-04-27 00:58:08.196857] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.570 [2024-04-27 00:58:08.196985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.570 [2024-04-27 00:58:08.197002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.570 [2024-04-27 00:58:08.197009] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.570 [2024-04-27 00:58:08.197015] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.197032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 00:24:15.570 [2024-04-27 00:58:08.206904] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.570 [2024-04-27 00:58:08.207034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.570 [2024-04-27 00:58:08.207050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.570 [2024-04-27 00:58:08.207057] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.570 [2024-04-27 00:58:08.207063] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.207086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 00:24:15.570 [2024-04-27 00:58:08.216905] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.570 [2024-04-27 00:58:08.217035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.570 [2024-04-27 00:58:08.217052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.570 [2024-04-27 00:58:08.217059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.570 [2024-04-27 00:58:08.217065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.217091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 
00:24:15.570 [2024-04-27 00:58:08.226917] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.570 [2024-04-27 00:58:08.227041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.570 [2024-04-27 00:58:08.227057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.570 [2024-04-27 00:58:08.227065] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.570 [2024-04-27 00:58:08.227078] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.227096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 00:24:15.570 [2024-04-27 00:58:08.236898] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.570 [2024-04-27 00:58:08.237029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.570 [2024-04-27 00:58:08.237045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.570 [2024-04-27 00:58:08.237053] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.570 [2024-04-27 00:58:08.237059] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.237081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 00:24:15.570 [2024-04-27 00:58:08.246992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.570 [2024-04-27 00:58:08.247122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.570 [2024-04-27 00:58:08.247138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.570 [2024-04-27 00:58:08.247146] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.570 [2024-04-27 00:58:08.247152] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.247168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 
00:24:15.570 [2024-04-27 00:58:08.257021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.570 [2024-04-27 00:58:08.257156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.570 [2024-04-27 00:58:08.257173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.570 [2024-04-27 00:58:08.257180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.570 [2024-04-27 00:58:08.257186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.570 [2024-04-27 00:58:08.257203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.570 qpair failed and we were unable to recover it. 00:24:15.832 [2024-04-27 00:58:08.267034] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.832 [2024-04-27 00:58:08.267331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.832 [2024-04-27 00:58:08.267351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.832 [2024-04-27 00:58:08.267358] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.832 [2024-04-27 00:58:08.267365] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.832 [2024-04-27 00:58:08.267381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.832 qpair failed and we were unable to recover it. 00:24:15.832 [2024-04-27 00:58:08.277078] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.832 [2024-04-27 00:58:08.277204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.832 [2024-04-27 00:58:08.277221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.832 [2024-04-27 00:58:08.277228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.832 [2024-04-27 00:58:08.277234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.832 [2024-04-27 00:58:08.277250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.832 qpair failed and we were unable to recover it. 
00:24:15.832 [2024-04-27 00:58:08.287112] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.832 [2024-04-27 00:58:08.287238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.832 [2024-04-27 00:58:08.287254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.832 [2024-04-27 00:58:08.287261] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.832 [2024-04-27 00:58:08.287267] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.832 [2024-04-27 00:58:08.287283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.832 qpair failed and we were unable to recover it. 00:24:15.832 [2024-04-27 00:58:08.297128] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.832 [2024-04-27 00:58:08.297256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.832 [2024-04-27 00:58:08.297273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.832 [2024-04-27 00:58:08.297280] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.832 [2024-04-27 00:58:08.297286] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.832 [2024-04-27 00:58:08.297303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.832 qpair failed and we were unable to recover it. 00:24:15.832 [2024-04-27 00:58:08.307087] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.832 [2024-04-27 00:58:08.307213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.832 [2024-04-27 00:58:08.307230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.832 [2024-04-27 00:58:08.307237] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.832 [2024-04-27 00:58:08.307246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.832 [2024-04-27 00:58:08.307262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.832 qpair failed and we were unable to recover it. 
00:24:15.832 [2024-04-27 00:58:08.317220] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.832 [2024-04-27 00:58:08.317351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.832 [2024-04-27 00:58:08.317368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.832 [2024-04-27 00:58:08.317375] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.832 [2024-04-27 00:58:08.317384] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.832 [2024-04-27 00:58:08.317401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.832 qpair failed and we were unable to recover it. 00:24:15.832 [2024-04-27 00:58:08.327202] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.832 [2024-04-27 00:58:08.327328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.832 [2024-04-27 00:58:08.327344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.832 [2024-04-27 00:58:08.327352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.832 [2024-04-27 00:58:08.327358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.832 [2024-04-27 00:58:08.327374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.832 qpair failed and we were unable to recover it. 00:24:15.832 [2024-04-27 00:58:08.337215] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.832 [2024-04-27 00:58:08.337374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.832 [2024-04-27 00:58:08.337389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.832 [2024-04-27 00:58:08.337397] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.832 [2024-04-27 00:58:08.337402] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.832 [2024-04-27 00:58:08.337420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.832 qpair failed and we were unable to recover it. 
00:24:15.832 [2024-04-27 00:58:08.347279] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.347410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.347426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.347433] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.833 [2024-04-27 00:58:08.347439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.833 [2024-04-27 00:58:08.347456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.833 qpair failed and we were unable to recover it. 00:24:15.833 [2024-04-27 00:58:08.357244] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.357373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.357389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.357396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.833 [2024-04-27 00:58:08.357403] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.833 [2024-04-27 00:58:08.357419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.833 qpair failed and we were unable to recover it. 00:24:15.833 [2024-04-27 00:58:08.367303] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.367431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.367449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.367456] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.833 [2024-04-27 00:58:08.367463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.833 [2024-04-27 00:58:08.367480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.833 qpair failed and we were unable to recover it. 
00:24:15.833 [2024-04-27 00:58:08.377465] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.377597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.377613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.377620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.833 [2024-04-27 00:58:08.377627] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.833 [2024-04-27 00:58:08.377643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.833 qpair failed and we were unable to recover it. 00:24:15.833 [2024-04-27 00:58:08.387373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.387502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.387518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.387526] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.833 [2024-04-27 00:58:08.387532] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.833 [2024-04-27 00:58:08.387548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.833 qpair failed and we were unable to recover it. 00:24:15.833 [2024-04-27 00:58:08.397437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.397571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.397587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.397595] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.833 [2024-04-27 00:58:08.397604] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.833 [2024-04-27 00:58:08.397620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.833 qpair failed and we were unable to recover it. 
00:24:15.833 [2024-04-27 00:58:08.407463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.407593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.407612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.407620] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.833 [2024-04-27 00:58:08.407626] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.833 [2024-04-27 00:58:08.407643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.833 qpair failed and we were unable to recover it. 00:24:15.833 [2024-04-27 00:58:08.417416] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.417545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.417564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.417571] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.833 [2024-04-27 00:58:08.417577] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.833 [2024-04-27 00:58:08.417594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.833 qpair failed and we were unable to recover it. 00:24:15.833 [2024-04-27 00:58:08.427548] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.427674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.427691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.427699] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.833 [2024-04-27 00:58:08.427705] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.833 [2024-04-27 00:58:08.427722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.833 qpair failed and we were unable to recover it. 
00:24:15.833 [2024-04-27 00:58:08.437509] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.437641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.437658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.437665] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.833 [2024-04-27 00:58:08.437672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.833 [2024-04-27 00:58:08.437688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.833 qpair failed and we were unable to recover it. 00:24:15.833 [2024-04-27 00:58:08.447497] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.447628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.447644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.447652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.833 [2024-04-27 00:58:08.447658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.833 [2024-04-27 00:58:08.447674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.833 qpair failed and we were unable to recover it. 00:24:15.833 [2024-04-27 00:58:08.457517] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.457642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.457658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.457666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.833 [2024-04-27 00:58:08.457672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.833 [2024-04-27 00:58:08.457688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.833 qpair failed and we were unable to recover it. 
00:24:15.833 [2024-04-27 00:58:08.467606] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.467731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.467747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.467754] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.833 [2024-04-27 00:58:08.467760] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.833 [2024-04-27 00:58:08.467777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.833 qpair failed and we were unable to recover it. 00:24:15.833 [2024-04-27 00:58:08.477696] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.833 [2024-04-27 00:58:08.477839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.833 [2024-04-27 00:58:08.477856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.833 [2024-04-27 00:58:08.477863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.834 [2024-04-27 00:58:08.477869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.834 [2024-04-27 00:58:08.477886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.834 qpair failed and we were unable to recover it. 00:24:15.834 [2024-04-27 00:58:08.487660] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.834 [2024-04-27 00:58:08.487783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.834 [2024-04-27 00:58:08.487800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.834 [2024-04-27 00:58:08.487810] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.834 [2024-04-27 00:58:08.487816] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.834 [2024-04-27 00:58:08.487831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.834 qpair failed and we were unable to recover it. 
00:24:15.834 [2024-04-27 00:58:08.497642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.834 [2024-04-27 00:58:08.497766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.834 [2024-04-27 00:58:08.497782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.834 [2024-04-27 00:58:08.497789] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.834 [2024-04-27 00:58:08.497795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.834 [2024-04-27 00:58:08.497812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.834 qpair failed and we were unable to recover it. 00:24:15.834 [2024-04-27 00:58:08.507731] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.834 [2024-04-27 00:58:08.507860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.834 [2024-04-27 00:58:08.507879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.834 [2024-04-27 00:58:08.507887] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.834 [2024-04-27 00:58:08.507894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.834 [2024-04-27 00:58:08.507912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.834 qpair failed and we were unable to recover it. 00:24:15.834 [2024-04-27 00:58:08.517753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:15.834 [2024-04-27 00:58:08.517934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:15.834 [2024-04-27 00:58:08.517950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:15.834 [2024-04-27 00:58:08.517958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:15.834 [2024-04-27 00:58:08.517965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:15.834 [2024-04-27 00:58:08.517981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:15.834 qpair failed and we were unable to recover it. 
00:24:16.094 [2024-04-27 00:58:08.527775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.094 [2024-04-27 00:58:08.527899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.094 [2024-04-27 00:58:08.527916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.094 [2024-04-27 00:58:08.527923] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.094 [2024-04-27 00:58:08.527930] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.094 [2024-04-27 00:58:08.527947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.094 qpair failed and we were unable to recover it. 00:24:16.094 [2024-04-27 00:58:08.537802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.094 [2024-04-27 00:58:08.537931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.094 [2024-04-27 00:58:08.537947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.094 [2024-04-27 00:58:08.537954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.094 [2024-04-27 00:58:08.537960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.094 [2024-04-27 00:58:08.537977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.094 qpair failed and we were unable to recover it. 00:24:16.094 [2024-04-27 00:58:08.547794] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.094 [2024-04-27 00:58:08.547920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.094 [2024-04-27 00:58:08.547936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.094 [2024-04-27 00:58:08.547944] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.094 [2024-04-27 00:58:08.547950] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.094 [2024-04-27 00:58:08.547966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.094 qpair failed and we were unable to recover it. 
00:24:16.094 [2024-04-27 00:58:08.557831] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.094 [2024-04-27 00:58:08.558132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.094 [2024-04-27 00:58:08.558149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.094 [2024-04-27 00:58:08.558157] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.094 [2024-04-27 00:58:08.558164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.094 [2024-04-27 00:58:08.558181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.094 qpair failed and we were unable to recover it. 00:24:16.094 [2024-04-27 00:58:08.567906] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.094 [2024-04-27 00:58:08.568035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.094 [2024-04-27 00:58:08.568051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.094 [2024-04-27 00:58:08.568059] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.094 [2024-04-27 00:58:08.568065] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.094 [2024-04-27 00:58:08.568088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.094 qpair failed and we were unable to recover it. 00:24:16.094 [2024-04-27 00:58:08.577958] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.094 [2024-04-27 00:58:08.578102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.094 [2024-04-27 00:58:08.578122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.094 [2024-04-27 00:58:08.578129] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.094 [2024-04-27 00:58:08.578136] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.094 [2024-04-27 00:58:08.578152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.094 qpair failed and we were unable to recover it. 
00:24:16.094 [2024-04-27 00:58:08.587960] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.094 [2024-04-27 00:58:08.588093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.094 [2024-04-27 00:58:08.588110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.094 [2024-04-27 00:58:08.588117] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.094 [2024-04-27 00:58:08.588123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.094 [2024-04-27 00:58:08.588139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.094 qpair failed and we were unable to recover it. 00:24:16.094 [2024-04-27 00:58:08.598010] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.094 [2024-04-27 00:58:08.598143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.094 [2024-04-27 00:58:08.598159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.094 [2024-04-27 00:58:08.598166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.094 [2024-04-27 00:58:08.598173] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.094 [2024-04-27 00:58:08.598189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.094 qpair failed and we were unable to recover it. 00:24:16.094 [2024-04-27 00:58:08.608100] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.094 [2024-04-27 00:58:08.608230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.094 [2024-04-27 00:58:08.608246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.094 [2024-04-27 00:58:08.608254] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.094 [2024-04-27 00:58:08.608260] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.094 [2024-04-27 00:58:08.608275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.094 qpair failed and we were unable to recover it. 
00:24:16.094 [2024-04-27 00:58:08.618078] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.094 [2024-04-27 00:58:08.618200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.094 [2024-04-27 00:58:08.618216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.618223] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.618229] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.618248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.095 qpair failed and we were unable to recover it. 00:24:16.095 [2024-04-27 00:58:08.628096] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.095 [2024-04-27 00:58:08.628221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.095 [2024-04-27 00:58:08.628237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.628244] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.628250] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.628266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.095 qpair failed and we were unable to recover it. 00:24:16.095 [2024-04-27 00:58:08.638127] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.095 [2024-04-27 00:58:08.638269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.095 [2024-04-27 00:58:08.638286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.638293] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.638300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.638316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.095 qpair failed and we were unable to recover it. 
00:24:16.095 [2024-04-27 00:58:08.648130] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.095 [2024-04-27 00:58:08.648281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.095 [2024-04-27 00:58:08.648298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.648307] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.648314] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.648330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.095 qpair failed and we were unable to recover it. 00:24:16.095 [2024-04-27 00:58:08.658192] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.095 [2024-04-27 00:58:08.658315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.095 [2024-04-27 00:58:08.658331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.658338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.658344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.658360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.095 qpair failed and we were unable to recover it. 00:24:16.095 [2024-04-27 00:58:08.668240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.095 [2024-04-27 00:58:08.668368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.095 [2024-04-27 00:58:08.668389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.668396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.668403] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.668420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.095 qpair failed and we were unable to recover it. 
00:24:16.095 [2024-04-27 00:58:08.678174] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.095 [2024-04-27 00:58:08.678301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.095 [2024-04-27 00:58:08.678317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.678325] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.678331] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.678347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.095 qpair failed and we were unable to recover it. 00:24:16.095 [2024-04-27 00:58:08.688302] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.095 [2024-04-27 00:58:08.688442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.095 [2024-04-27 00:58:08.688458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.688466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.688471] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.688487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.095 qpair failed and we were unable to recover it. 00:24:16.095 [2024-04-27 00:58:08.698305] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.095 [2024-04-27 00:58:08.698433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.095 [2024-04-27 00:58:08.698449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.698456] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.698463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.698479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.095 qpair failed and we were unable to recover it. 
00:24:16.095 [2024-04-27 00:58:08.708311] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.095 [2024-04-27 00:58:08.708437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.095 [2024-04-27 00:58:08.708454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.708461] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.708470] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.708486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.095 qpair failed and we were unable to recover it. 00:24:16.095 [2024-04-27 00:58:08.718363] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.095 [2024-04-27 00:58:08.718487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.095 [2024-04-27 00:58:08.718504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.718511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.718517] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.718533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.095 qpair failed and we were unable to recover it. 00:24:16.095 [2024-04-27 00:58:08.728400] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.095 [2024-04-27 00:58:08.728563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.095 [2024-04-27 00:58:08.728579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.728586] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.728593] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.728609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.095 qpair failed and we were unable to recover it. 
00:24:16.095 [2024-04-27 00:58:08.738411] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.095 [2024-04-27 00:58:08.738536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.095 [2024-04-27 00:58:08.738553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.738560] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.738566] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.738583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.095 qpair failed and we were unable to recover it. 00:24:16.095 [2024-04-27 00:58:08.748429] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.095 [2024-04-27 00:58:08.748556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.095 [2024-04-27 00:58:08.748575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.095 [2024-04-27 00:58:08.748582] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.095 [2024-04-27 00:58:08.748588] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.095 [2024-04-27 00:58:08.748605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.096 qpair failed and we were unable to recover it. 00:24:16.096 [2024-04-27 00:58:08.758483] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.096 [2024-04-27 00:58:08.758614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.096 [2024-04-27 00:58:08.758631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.096 [2024-04-27 00:58:08.758639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.096 [2024-04-27 00:58:08.758645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7960000b90 00:24:16.096 [2024-04-27 00:58:08.758661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.096 qpair failed and we were unable to recover it. 
00:24:16.096 [2024-04-27 00:58:08.768503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.096 [2024-04-27 00:58:08.768704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.096 [2024-04-27 00:58:08.768734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.096 [2024-04-27 00:58:08.768746] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.096 [2024-04-27 00:58:08.768756] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:16.096 [2024-04-27 00:58:08.768780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:16.096 qpair failed and we were unable to recover it. 00:24:16.096 [2024-04-27 00:58:08.778543] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.096 [2024-04-27 00:58:08.778668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.096 [2024-04-27 00:58:08.778686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.096 [2024-04-27 00:58:08.778694] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.096 [2024-04-27 00:58:08.778700] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:16.096 [2024-04-27 00:58:08.778717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:16.096 qpair failed and we were unable to recover it. 00:24:16.355 [2024-04-27 00:58:08.788535] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.355 [2024-04-27 00:58:08.788666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.355 [2024-04-27 00:58:08.788684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.355 [2024-04-27 00:58:08.788692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.355 [2024-04-27 00:58:08.788699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:16.355 [2024-04-27 00:58:08.788715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:16.355 qpair failed and we were unable to recover it. 
00:24:16.355 [2024-04-27 00:58:08.798558] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.355 [2024-04-27 00:58:08.798685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.355 [2024-04-27 00:58:08.798702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.356 [2024-04-27 00:58:08.798709] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.356 [2024-04-27 00:58:08.798719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7958000b90 00:24:16.356 [2024-04-27 00:58:08.798735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:16.356 qpair failed and we were unable to recover it. 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Write completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Write completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Write completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Write completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Write completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Write completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Write completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Write completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Write completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Write completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Write completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Write completed with 
error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Read completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 Write completed with error (sct=0, sc=8) 00:24:16.356 starting I/O failed 00:24:16.356 [2024-04-27 00:58:08.799124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.356 [2024-04-27 00:58:08.808634] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.356 [2024-04-27 00:58:08.808808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.356 [2024-04-27 00:58:08.808837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.356 [2024-04-27 00:58:08.808848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.356 [2024-04-27 00:58:08.808858] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:16.356 [2024-04-27 00:58:08.808881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:16.356 qpair failed and we were unable to recover it. 00:24:16.356 [2024-04-27 00:58:08.808907] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7377e0 (9): Bad file descriptor 00:24:16.356 [2024-04-27 00:58:08.818652] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.356 [2024-04-27 00:58:08.818861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.356 [2024-04-27 00:58:08.818887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.356 [2024-04-27 00:58:08.818899] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.356 [2024-04-27 00:58:08.818912] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7968000b90 00:24:16.356 [2024-04-27 00:58:08.818936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.356 qpair failed and we were unable to recover it. 
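The repeated "CQ transport error -6 (No such device or address)" entries above and below come from spdk_nvme_qpair_process_completions() returning -ENXIO once the target drops the connection during CONNECT. A minimal host-side polling sketch, assuming a controller and I/O qpair were already created elsewhere (this is an illustrative fragment, not the test's own code):

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Poll one I/O qpair; returns 0 while healthy, -1 once the transport
     * connection is gone (the condition logged as "CQ transport error -6"
     * in this run). */
    static int poll_io_qpair(struct spdk_nvme_qpair *qpair)
    {
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* 0 = no limit */);

            if (rc == -ENXIO) {
                    /* errno 6: qpair disconnected at the transport layer;
                     * the caller must reconnect or tear the qpair down. */
                    return -1;
            }
            return rc < 0 ? -1 : 0;
    }

Since this run is the target_disconnect test case, the failures recorded here appear to be the injected condition under test rather than an unexpected defect.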
00:24:16.356 [2024-04-27 00:58:08.828749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.356 [2024-04-27 00:58:08.828918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.356 [2024-04-27 00:58:08.828941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.356 [2024-04-27 00:58:08.828952] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.356 [2024-04-27 00:58:08.828962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x729cf0 00:24:16.356 [2024-04-27 00:58:08.828984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:16.356 qpair failed and we were unable to recover it. 00:24:16.356 [2024-04-27 00:58:08.838726] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:16.356 [2024-04-27 00:58:08.838919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:16.356 [2024-04-27 00:58:08.838943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:16.356 [2024-04-27 00:58:08.838955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:16.356 [2024-04-27 00:58:08.838964] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7968000b90 00:24:16.356 [2024-04-27 00:58:08.838987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.356 qpair failed and we were unable to recover it. 00:24:16.356 Initializing NVMe Controllers 00:24:16.356 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:16.356 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:16.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:16.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:16.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:16.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:16.356 Initialization complete. Launching workers. 
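The "Initializing NVMe Controllers ... Initialization complete. Launching workers." block above is the initiator side of the tc2 case coming back up: it attaches to the subsystem at 10.0.0.2:4420, associates one TCP queue pair with each of lcores 0-3, and restarts its I/O workers. A comparable standalone NVMe/TCP load against the same listener could be generated with the bundled perf example; a rough sketch only (assumes the example binaries were built, and option spelling can differ between SPDK releases):

  # queue depth 32, 4 KiB random mixed I/O for 10 seconds against the subsystem from this run
  ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'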
00:24:16.356 Starting thread on core 1 00:24:16.356 Starting thread on core 2 00:24:16.356 Starting thread on core 3 00:24:16.356 Starting thread on core 0 00:24:16.356 00:58:08 -- host/target_disconnect.sh@59 -- # sync 00:24:16.356 00:24:16.356 real 0m11.260s 00:24:16.356 user 0m20.811s 00:24:16.356 sys 0m4.162s 00:24:16.356 00:58:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:16.356 00:58:08 -- common/autotest_common.sh@10 -- # set +x 00:24:16.356 ************************************ 00:24:16.356 END TEST nvmf_target_disconnect_tc2 00:24:16.356 ************************************ 00:24:16.356 00:58:08 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:24:16.356 00:58:08 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:16.356 00:58:08 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:24:16.356 00:58:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:16.356 00:58:08 -- nvmf/common.sh@117 -- # sync 00:24:16.356 00:58:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.356 00:58:08 -- nvmf/common.sh@120 -- # set +e 00:24:16.356 00:58:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.356 00:58:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.356 rmmod nvme_tcp 00:24:16.356 rmmod nvme_fabrics 00:24:16.356 rmmod nvme_keyring 00:24:16.356 00:58:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.356 00:58:08 -- nvmf/common.sh@124 -- # set -e 00:24:16.356 00:58:08 -- nvmf/common.sh@125 -- # return 0 00:24:16.356 00:58:08 -- nvmf/common.sh@478 -- # '[' -n 1816419 ']' 00:24:16.356 00:58:08 -- nvmf/common.sh@479 -- # killprocess 1816419 00:24:16.356 00:58:08 -- common/autotest_common.sh@936 -- # '[' -z 1816419 ']' 00:24:16.356 00:58:08 -- common/autotest_common.sh@940 -- # kill -0 1816419 00:24:16.356 00:58:08 -- common/autotest_common.sh@941 -- # uname 00:24:16.356 00:58:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:16.356 00:58:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1816419 00:24:16.356 00:58:08 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:24:16.356 00:58:08 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:24:16.356 00:58:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1816419' 00:24:16.356 killing process with pid 1816419 00:24:16.356 00:58:08 -- common/autotest_common.sh@955 -- # kill 1816419 00:24:16.356 00:58:08 -- common/autotest_common.sh@960 -- # wait 1816419 00:24:16.616 00:58:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:16.616 00:58:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:16.616 00:58:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:16.616 00:58:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:16.616 00:58:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:16.616 00:58:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.616 00:58:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.616 00:58:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.153 00:58:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:19.153 00:24:19.153 real 0m19.484s 00:24:19.153 user 0m47.920s 00:24:19.153 sys 0m8.579s 00:24:19.153 00:58:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:19.153 00:58:11 -- common/autotest_common.sh@10 -- # set +x 00:24:19.153 ************************************ 00:24:19.153 END TEST nvmf_target_disconnect 00:24:19.153 
************************************ 00:24:19.153 00:58:11 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:24:19.153 00:58:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:19.153 00:58:11 -- common/autotest_common.sh@10 -- # set +x 00:24:19.153 00:58:11 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:24:19.153 00:24:19.153 real 18m6.121s 00:24:19.153 user 38m23.210s 00:24:19.153 sys 5m44.892s 00:24:19.153 00:58:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:19.153 00:58:11 -- common/autotest_common.sh@10 -- # set +x 00:24:19.153 ************************************ 00:24:19.153 END TEST nvmf_tcp 00:24:19.153 ************************************ 00:24:19.153 00:58:11 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:24:19.153 00:58:11 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:19.153 00:58:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:19.153 00:58:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:19.153 00:58:11 -- common/autotest_common.sh@10 -- # set +x 00:24:19.153 ************************************ 00:24:19.153 START TEST spdkcli_nvmf_tcp 00:24:19.153 ************************************ 00:24:19.153 00:58:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:24:19.153 * Looking for test storage... 00:24:19.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:24:19.153 00:58:11 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:24:19.153 00:58:11 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:19.153 00:58:11 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:24:19.153 00:58:11 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.153 00:58:11 -- nvmf/common.sh@7 -- # uname -s 00:24:19.153 00:58:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.153 00:58:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.153 00:58:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.153 00:58:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.153 00:58:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.153 00:58:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.153 00:58:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.153 00:58:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.153 00:58:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.153 00:58:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.154 00:58:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:19.154 00:58:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:19.154 00:58:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.154 00:58:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.154 00:58:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.154 00:58:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.154 00:58:11 -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.154 00:58:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.154 00:58:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.154 00:58:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.154 00:58:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.154 00:58:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.154 00:58:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.154 00:58:11 -- paths/export.sh@5 -- # export PATH 00:24:19.154 00:58:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.154 00:58:11 -- nvmf/common.sh@47 -- # : 0 00:24:19.154 00:58:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.154 00:58:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.154 00:58:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.154 00:58:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.154 00:58:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.154 00:58:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.154 00:58:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.154 00:58:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.154 00:58:11 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:19.154 00:58:11 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:19.154 00:58:11 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:19.154 00:58:11 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:19.154 00:58:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:19.154 00:58:11 -- common/autotest_common.sh@10 -- # set +x 00:24:19.154 00:58:11 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:19.154 00:58:11 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1818125 00:24:19.154 00:58:11 -- spdkcli/common.sh@34 -- # waitforlisten 1818125 00:24:19.154 00:58:11 -- common/autotest_common.sh@817 -- # '[' -z 1818125 ']' 00:24:19.154 00:58:11 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.154 00:58:11 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:19.154 00:58:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:19.154 00:58:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.154 00:58:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:19.154 00:58:11 -- common/autotest_common.sh@10 -- # set +x 00:24:19.154 [2024-04-27 00:58:11.696517] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:24:19.154 [2024-04-27 00:58:11.696561] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818125 ] 00:24:19.154 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.154 [2024-04-27 00:58:11.750314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:19.154 [2024-04-27 00:58:11.819937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.154 [2024-04-27 00:58:11.819940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.092 00:58:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:20.092 00:58:12 -- common/autotest_common.sh@850 -- # return 0 00:24:20.092 00:58:12 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:20.092 00:58:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:20.092 00:58:12 -- common/autotest_common.sh@10 -- # set +x 00:24:20.092 00:58:12 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:20.092 00:58:12 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:24:20.092 00:58:12 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:20.092 00:58:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:20.092 00:58:12 -- common/autotest_common.sh@10 -- # set +x 00:24:20.092 00:58:12 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:20.092 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:20.092 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:20.092 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:20.092 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:20.092 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:20.092 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:20.092 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:20.092 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:20.092 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:20.092 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:20.092 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:20.092 ' 00:24:20.351 [2024-04-27 00:58:12.870251] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:22.883 [2024-04-27 00:58:15.111828] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.820 [2024-04-27 00:58:16.396094] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:24:26.353 [2024-04-27 00:58:18.779372] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:24:28.258 [2024-04-27 00:58:20.833896] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:24:30.162 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:30.162 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:30.162 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:30.162 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:30.162 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:30.162 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:30.162 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:30.162 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:30.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:30.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:30.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:30.162 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:30.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:30.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:30.162 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:30.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:30.162 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:24:30.163 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:30.163 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:30.163 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:30.163 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:30.163 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:30.163 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:24:30.163 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:24:30.163 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:30.163 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:30.163 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:30.163 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:30.163 00:58:22 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:30.163 00:58:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:30.163 00:58:22 -- common/autotest_common.sh@10 -- # set +x 00:24:30.163 00:58:22 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:30.163 00:58:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:30.163 00:58:22 -- common/autotest_common.sh@10 -- # set +x 00:24:30.163 00:58:22 -- spdkcli/nvmf.sh@69 -- # check_match 00:24:30.163 00:58:22 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:24:30.429 00:58:22 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:30.429 00:58:22 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:30.429 00:58:22 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:30.429 00:58:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:30.429 00:58:22 -- common/autotest_common.sh@10 -- # set +x 00:24:30.429 00:58:22 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:30.429 00:58:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:30.429 00:58:22 -- common/autotest_common.sh@10 -- # set +x 00:24:30.429 00:58:22 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:30.429 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:30.429 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:30.429 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:30.429 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:24:30.429 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:24:30.429 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:30.429 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:30.429 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:30.429 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:30.429 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:30.429 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:30.429 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:30.429 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:30.429 ' 00:24:35.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:35.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:35.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:35.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:35.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:24:35.697 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:24:35.697 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:35.697 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:35.697 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:35.697 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:24:35.697 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:24:35.697 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:35.697 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:35.697 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:35.697 00:58:27 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:35.697 00:58:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:35.697 00:58:27 -- common/autotest_common.sh@10 -- # set +x 00:24:35.697 00:58:27 -- spdkcli/nvmf.sh@90 -- # killprocess 1818125 00:24:35.697 00:58:27 -- common/autotest_common.sh@936 -- # '[' -z 1818125 ']' 00:24:35.697 00:58:27 -- common/autotest_common.sh@940 -- # kill -0 1818125 00:24:35.697 00:58:27 -- common/autotest_common.sh@941 -- # uname 00:24:35.697 00:58:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:35.697 00:58:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1818125 00:24:35.697 00:58:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:35.697 00:58:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:35.697 00:58:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1818125' 00:24:35.697 killing process with pid 1818125 00:24:35.697 00:58:27 -- common/autotest_common.sh@955 -- # kill 1818125 00:24:35.697 [2024-04-27 00:58:27.999853] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:35.697 00:58:27 -- common/autotest_common.sh@960 -- # wait 1818125 00:24:35.697 00:58:28 -- spdkcli/nvmf.sh@1 -- # cleanup 00:24:35.697 00:58:28 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:24:35.697 00:58:28 -- spdkcli/common.sh@13 -- # '[' -n 1818125 ']' 00:24:35.697 00:58:28 -- spdkcli/common.sh@14 -- # killprocess 1818125 00:24:35.698 00:58:28 -- common/autotest_common.sh@936 -- # '[' -z 1818125 ']' 00:24:35.698 00:58:28 -- common/autotest_common.sh@940 -- # kill -0 1818125 00:24:35.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1818125) - No such process 00:24:35.698 00:58:28 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1818125 is not found' 00:24:35.698 Process with pid 1818125 is not found 00:24:35.698 00:58:28 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:24:35.698 00:58:28 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:24:35.698 00:58:28 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:24:35.698 00:24:35.698 real 0m16.685s 00:24:35.698 user 0m35.670s 00:24:35.698 sys 0m0.837s 00:24:35.698 00:58:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:35.698 00:58:28 -- common/autotest_common.sh@10 -- # set +x 00:24:35.698 ************************************ 00:24:35.698 END TEST spdkcli_nvmf_tcp 00:24:35.698 ************************************ 00:24:35.698 00:58:28 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:35.698 00:58:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:35.698 00:58:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:35.698 00:58:28 -- 
common/autotest_common.sh@10 -- # set +x 00:24:35.698 ************************************ 00:24:35.698 START TEST nvmf_identify_passthru 00:24:35.698 ************************************ 00:24:35.698 00:58:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:24:35.957 * Looking for test storage... 00:24:35.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:35.957 00:58:28 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.957 00:58:28 -- nvmf/common.sh@7 -- # uname -s 00:24:35.957 00:58:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.957 00:58:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.957 00:58:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.957 00:58:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.957 00:58:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.957 00:58:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.957 00:58:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.957 00:58:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.957 00:58:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.957 00:58:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.957 00:58:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:35.957 00:58:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:35.957 00:58:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.957 00:58:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.957 00:58:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.957 00:58:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.957 00:58:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.957 00:58:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.957 00:58:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.957 00:58:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.957 00:58:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.957 00:58:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.957 00:58:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.957 00:58:28 -- paths/export.sh@5 -- # export PATH 00:24:35.957 00:58:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.957 00:58:28 -- nvmf/common.sh@47 -- # : 0 00:24:35.957 00:58:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:35.957 00:58:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:35.957 00:58:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.957 00:58:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.957 00:58:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.957 00:58:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:35.957 00:58:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:35.957 00:58:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:35.957 00:58:28 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.957 00:58:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.957 00:58:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.957 00:58:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.957 00:58:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.957 00:58:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.957 00:58:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.957 00:58:28 -- paths/export.sh@5 -- # export PATH 00:24:35.957 00:58:28 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.957 00:58:28 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:24:35.957 00:58:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:35.957 00:58:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.957 00:58:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:35.957 00:58:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:35.957 00:58:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:35.957 00:58:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.957 00:58:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:35.957 00:58:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.957 00:58:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:35.957 00:58:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:35.957 00:58:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:35.957 00:58:28 -- common/autotest_common.sh@10 -- # set +x 00:24:41.236 00:58:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:41.236 00:58:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:41.236 00:58:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:41.236 00:58:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:41.236 00:58:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:41.236 00:58:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:41.236 00:58:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:41.236 00:58:33 -- nvmf/common.sh@295 -- # net_devs=() 00:24:41.236 00:58:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:41.236 00:58:33 -- nvmf/common.sh@296 -- # e810=() 00:24:41.236 00:58:33 -- nvmf/common.sh@296 -- # local -ga e810 00:24:41.236 00:58:33 -- nvmf/common.sh@297 -- # x722=() 00:24:41.236 00:58:33 -- nvmf/common.sh@297 -- # local -ga x722 00:24:41.236 00:58:33 -- nvmf/common.sh@298 -- # mlx=() 00:24:41.236 00:58:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:41.236 00:58:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.236 00:58:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.236 00:58:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.236 00:58:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.236 00:58:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.236 00:58:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.236 00:58:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.236 00:58:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.236 00:58:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.236 00:58:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.236 00:58:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.236 00:58:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:41.236 00:58:33 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:41.236 00:58:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:41.236 00:58:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.236 00:58:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:41.236 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:41.236 00:58:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:41.236 00:58:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:41.236 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:41.236 00:58:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:41.236 00:58:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.236 00:58:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.236 00:58:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:41.236 00:58:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.236 00:58:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:41.236 Found net devices under 0000:86:00.0: cvl_0_0 00:24:41.236 00:58:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.236 00:58:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:41.236 00:58:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.236 00:58:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:41.236 00:58:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.236 00:58:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:41.236 Found net devices under 0000:86:00.1: cvl_0_1 00:24:41.236 00:58:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.236 00:58:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:41.236 00:58:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:41.236 00:58:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:41.236 00:58:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.236 00:58:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.236 00:58:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.236 00:58:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:41.236 00:58:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.236 00:58:33 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.236 00:58:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:41.236 00:58:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.236 00:58:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.236 00:58:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:41.236 00:58:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:41.236 00:58:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.236 00:58:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.236 00:58:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.236 00:58:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.236 00:58:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:41.236 00:58:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.236 00:58:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.236 00:58:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.236 00:58:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:41.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:41.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:24:41.236 00:24:41.236 --- 10.0.0.2 ping statistics --- 00:24:41.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.236 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:24:41.236 00:58:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:41.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:24:41.236 00:24:41.236 --- 10.0.0.1 ping statistics --- 00:24:41.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.236 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:24:41.236 00:58:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.236 00:58:33 -- nvmf/common.sh@411 -- # return 0 00:24:41.236 00:58:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:41.236 00:58:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.236 00:58:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:41.236 00:58:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.236 00:58:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:41.236 00:58:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:41.236 00:58:33 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:24:41.236 00:58:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:41.236 00:58:33 -- common/autotest_common.sh@10 -- # set +x 00:24:41.236 00:58:33 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:24:41.236 00:58:33 -- common/autotest_common.sh@1510 -- # bdfs=() 00:24:41.236 00:58:33 -- common/autotest_common.sh@1510 -- # local bdfs 00:24:41.236 00:58:33 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:24:41.236 00:58:33 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:24:41.236 00:58:33 -- common/autotest_common.sh@1499 -- # bdfs=() 00:24:41.236 00:58:33 -- common/autotest_common.sh@1499 -- # local bdfs 00:24:41.236 00:58:33 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:24:41.236 00:58:33 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:41.236 00:58:33 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:24:41.236 00:58:33 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:24:41.236 00:58:33 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:5e:00.0 00:24:41.236 00:58:33 -- common/autotest_common.sh@1513 -- # echo 0000:5e:00.0 00:24:41.236 00:58:33 -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:24:41.236 00:58:33 -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:24:41.236 00:58:33 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:24:41.236 00:58:33 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:24:41.236 00:58:33 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:24:41.236 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.436 00:58:37 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:24:45.436 00:58:37 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:24:45.436 00:58:37 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:24:45.436 00:58:37 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:24:45.436 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.635 00:58:42 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:24:49.635 00:58:42 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:24:49.635 00:58:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:49.635 00:58:42 -- common/autotest_common.sh@10 -- # set +x 00:24:49.635 00:58:42 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:24:49.635 00:58:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:49.635 00:58:42 -- common/autotest_common.sh@10 -- # set +x 00:24:49.635 00:58:42 -- target/identify_passthru.sh@31 -- # nvmfpid=1825170 00:24:49.635 00:58:42 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:49.635 00:58:42 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:49.635 00:58:42 -- target/identify_passthru.sh@35 -- # waitforlisten 1825170 00:24:49.635 00:58:42 -- common/autotest_common.sh@817 -- # '[' -z 1825170 ']' 00:24:49.635 00:58:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.635 00:58:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:49.636 00:58:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.636 00:58:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:49.636 00:58:42 -- common/autotest_common.sh@10 -- # set +x 00:24:49.636 [2024-04-27 00:58:42.191440] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
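The nvmf_tgt instance started above runs inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so the framework stays idle until the test enables identify passthrough and then starts it, as the rpc_cmd calls further down show. Driven by hand against such a target, the same sequence via rpc.py would look roughly like the sketch below (it mirrors the script's own calls; paths are relative to the spdk checkout and the PCIe address 0000:5e:00.0 is the one detected in this run):

  # enable passthrough of identify data before the subsystem layer starts, then start the framework
  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
  scripts/rpc.py framework_start_init
  # attach the local PCIe drive and export it over NVMe/TCP
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420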
00:24:49.636 [2024-04-27 00:58:42.191488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.636 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.636 [2024-04-27 00:58:42.249009] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:49.636 [2024-04-27 00:58:42.328191] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.636 [2024-04-27 00:58:42.328226] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.636 [2024-04-27 00:58:42.328233] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.636 [2024-04-27 00:58:42.328239] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.636 [2024-04-27 00:58:42.328244] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:49.636 [2024-04-27 00:58:42.328285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.636 [2024-04-27 00:58:42.328299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.636 [2024-04-27 00:58:42.328388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:49.636 [2024-04-27 00:58:42.328390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.571 00:58:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:50.571 00:58:42 -- common/autotest_common.sh@850 -- # return 0 00:24:50.571 00:58:42 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:24:50.571 00:58:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.571 00:58:42 -- common/autotest_common.sh@10 -- # set +x 00:24:50.571 INFO: Log level set to 20 00:24:50.571 INFO: Requests: 00:24:50.571 { 00:24:50.571 "jsonrpc": "2.0", 00:24:50.571 "method": "nvmf_set_config", 00:24:50.571 "id": 1, 00:24:50.571 "params": { 00:24:50.571 "admin_cmd_passthru": { 00:24:50.571 "identify_ctrlr": true 00:24:50.571 } 00:24:50.571 } 00:24:50.571 } 00:24:50.571 00:24:50.571 INFO: response: 00:24:50.571 { 00:24:50.571 "jsonrpc": "2.0", 00:24:50.571 "id": 1, 00:24:50.571 "result": true 00:24:50.571 } 00:24:50.571 00:24:50.571 00:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.571 00:58:43 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:24:50.571 00:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.571 00:58:43 -- common/autotest_common.sh@10 -- # set +x 00:24:50.571 INFO: Setting log level to 20 00:24:50.571 INFO: Setting log level to 20 00:24:50.571 INFO: Log level set to 20 00:24:50.571 INFO: Log level set to 20 00:24:50.571 INFO: Requests: 00:24:50.571 { 00:24:50.571 "jsonrpc": "2.0", 00:24:50.571 "method": "framework_start_init", 00:24:50.571 "id": 1 00:24:50.571 } 00:24:50.571 00:24:50.571 INFO: Requests: 00:24:50.571 { 00:24:50.571 "jsonrpc": "2.0", 00:24:50.571 "method": "framework_start_init", 00:24:50.571 "id": 1 00:24:50.571 } 00:24:50.571 00:24:50.571 [2024-04-27 00:58:43.092562] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:24:50.571 INFO: response: 00:24:50.571 { 00:24:50.571 "jsonrpc": "2.0", 00:24:50.571 "id": 1, 00:24:50.571 "result": true 00:24:50.571 } 00:24:50.571 00:24:50.571 INFO: response: 00:24:50.571 { 00:24:50.571 
"jsonrpc": "2.0", 00:24:50.571 "id": 1, 00:24:50.571 "result": true 00:24:50.571 } 00:24:50.571 00:24:50.571 00:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.571 00:58:43 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:50.571 00:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.571 00:58:43 -- common/autotest_common.sh@10 -- # set +x 00:24:50.571 INFO: Setting log level to 40 00:24:50.571 INFO: Setting log level to 40 00:24:50.571 INFO: Setting log level to 40 00:24:50.571 [2024-04-27 00:58:43.106003] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.571 00:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.571 00:58:43 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:24:50.571 00:58:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:50.571 00:58:43 -- common/autotest_common.sh@10 -- # set +x 00:24:50.571 00:58:43 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:24:50.571 00:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.571 00:58:43 -- common/autotest_common.sh@10 -- # set +x 00:24:53.860 Nvme0n1 00:24:53.860 00:58:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.860 00:58:45 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:24:53.860 00:58:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.860 00:58:45 -- common/autotest_common.sh@10 -- # set +x 00:24:53.860 00:58:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.860 00:58:45 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:53.860 00:58:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.860 00:58:45 -- common/autotest_common.sh@10 -- # set +x 00:24:53.860 00:58:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.860 00:58:45 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:53.860 00:58:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.860 00:58:45 -- common/autotest_common.sh@10 -- # set +x 00:24:53.860 [2024-04-27 00:58:46.000835] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.860 00:58:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.860 00:58:46 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:24:53.860 00:58:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.860 00:58:46 -- common/autotest_common.sh@10 -- # set +x 00:24:53.860 [2024-04-27 00:58:46.008636] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:53.860 [ 00:24:53.860 { 00:24:53.860 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:53.860 "subtype": "Discovery", 00:24:53.860 "listen_addresses": [], 00:24:53.860 "allow_any_host": true, 00:24:53.860 "hosts": [] 00:24:53.860 }, 00:24:53.860 { 00:24:53.860 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.860 "subtype": "NVMe", 00:24:53.860 "listen_addresses": [ 00:24:53.860 { 00:24:53.860 "transport": "TCP", 00:24:53.860 "trtype": "TCP", 00:24:53.860 "adrfam": "IPv4", 00:24:53.860 "traddr": "10.0.0.2", 00:24:53.860 "trsvcid": "4420" 00:24:53.860 } 00:24:53.860 ], 
00:24:53.860 "allow_any_host": true, 00:24:53.860 "hosts": [], 00:24:53.860 "serial_number": "SPDK00000000000001", 00:24:53.860 "model_number": "SPDK bdev Controller", 00:24:53.860 "max_namespaces": 1, 00:24:53.860 "min_cntlid": 1, 00:24:53.860 "max_cntlid": 65519, 00:24:53.860 "namespaces": [ 00:24:53.860 { 00:24:53.860 "nsid": 1, 00:24:53.860 "bdev_name": "Nvme0n1", 00:24:53.860 "name": "Nvme0n1", 00:24:53.860 "nguid": "E521298253A744F5AFECCA3669937610", 00:24:53.860 "uuid": "e5212982-53a7-44f5-afec-ca3669937610" 00:24:53.860 } 00:24:53.860 ] 00:24:53.860 } 00:24:53.860 ] 00:24:53.860 00:58:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.860 00:58:46 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:24:53.860 00:58:46 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:53.860 00:58:46 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:24:53.860 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.860 00:58:46 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:24:53.860 00:58:46 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:53.860 00:58:46 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:24:53.860 00:58:46 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:24:53.860 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.860 00:58:46 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:24:53.860 00:58:46 -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:24:53.860 00:58:46 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:24:53.860 00:58:46 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:53.860 00:58:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.860 00:58:46 -- common/autotest_common.sh@10 -- # set +x 00:24:53.860 00:58:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.860 00:58:46 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:24:53.860 00:58:46 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:24:53.860 00:58:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:53.860 00:58:46 -- nvmf/common.sh@117 -- # sync 00:24:53.860 00:58:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:53.860 00:58:46 -- nvmf/common.sh@120 -- # set +e 00:24:53.860 00:58:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:53.860 00:58:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:53.860 rmmod nvme_tcp 00:24:53.860 rmmod nvme_fabrics 00:24:53.860 rmmod nvme_keyring 00:24:53.860 00:58:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:53.860 00:58:46 -- nvmf/common.sh@124 -- # set -e 00:24:53.860 00:58:46 -- nvmf/common.sh@125 -- # return 0 00:24:53.860 00:58:46 -- nvmf/common.sh@478 -- # '[' -n 1825170 ']' 00:24:53.860 00:58:46 -- nvmf/common.sh@479 -- # killprocess 1825170 00:24:53.860 00:58:46 -- common/autotest_common.sh@936 -- # '[' -z 1825170 ']' 00:24:53.860 00:58:46 -- common/autotest_common.sh@940 -- # kill -0 1825170 00:24:53.860 00:58:46 -- common/autotest_common.sh@941 -- # uname 00:24:53.860 00:58:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:53.860 
00:58:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1825170 00:24:53.860 00:58:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:53.860 00:58:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:53.860 00:58:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1825170' 00:24:53.860 killing process with pid 1825170 00:24:53.860 00:58:46 -- common/autotest_common.sh@955 -- # kill 1825170 00:24:53.860 [2024-04-27 00:58:46.517017] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:53.860 00:58:46 -- common/autotest_common.sh@960 -- # wait 1825170 00:24:55.765 00:58:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:55.765 00:58:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:55.765 00:58:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:55.765 00:58:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:55.765 00:58:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:55.765 00:58:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.765 00:58:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:55.765 00:58:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.672 00:58:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:57.672 00:24:57.672 real 0m21.696s 00:24:57.672 user 0m29.974s 00:24:57.672 sys 0m4.808s 00:24:57.672 00:58:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:57.672 00:58:50 -- common/autotest_common.sh@10 -- # set +x 00:24:57.672 ************************************ 00:24:57.672 END TEST nvmf_identify_passthru 00:24:57.672 ************************************ 00:24:57.672 00:58:50 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:24:57.672 00:58:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:57.672 00:58:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:57.672 00:58:50 -- common/autotest_common.sh@10 -- # set +x 00:24:57.672 ************************************ 00:24:57.672 START TEST nvmf_dif 00:24:57.672 ************************************ 00:24:57.672 00:58:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:24:57.672 * Looking for test storage... 
00:24:57.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:57.672 00:58:50 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.672 00:58:50 -- nvmf/common.sh@7 -- # uname -s 00:24:57.672 00:58:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.672 00:58:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.672 00:58:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.672 00:58:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.672 00:58:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.672 00:58:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.672 00:58:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.672 00:58:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.672 00:58:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.672 00:58:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.672 00:58:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:57.672 00:58:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:57.672 00:58:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.672 00:58:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.672 00:58:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.672 00:58:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.672 00:58:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.672 00:58:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.672 00:58:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.672 00:58:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.672 00:58:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.672 00:58:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.672 00:58:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.672 00:58:50 -- paths/export.sh@5 -- # export PATH 00:24:57.672 00:58:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.672 00:58:50 -- nvmf/common.sh@47 -- # : 0 00:24:57.672 00:58:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.672 00:58:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.672 00:58:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.672 00:58:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.672 00:58:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.672 00:58:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.672 00:58:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.672 00:58:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.672 00:58:50 -- target/dif.sh@15 -- # NULL_META=16 00:24:57.672 00:58:50 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:57.672 00:58:50 -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:57.672 00:58:50 -- target/dif.sh@15 -- # NULL_DIF=1 00:24:57.936 00:58:50 -- target/dif.sh@135 -- # nvmftestinit 00:24:57.937 00:58:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:57.937 00:58:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.937 00:58:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:57.937 00:58:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:57.937 00:58:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:57.937 00:58:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.937 00:58:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:57.937 00:58:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.937 00:58:50 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:57.937 00:58:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:57.937 00:58:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:57.937 00:58:50 -- common/autotest_common.sh@10 -- # set +x 00:25:03.256 00:58:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:03.256 00:58:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:03.256 00:58:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:03.256 00:58:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:03.257 00:58:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:03.257 00:58:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:03.257 00:58:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:03.257 00:58:55 -- nvmf/common.sh@295 -- # net_devs=() 00:25:03.257 00:58:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:03.257 00:58:55 -- nvmf/common.sh@296 -- # e810=() 00:25:03.257 00:58:55 -- nvmf/common.sh@296 -- # local -ga e810 00:25:03.257 00:58:55 -- nvmf/common.sh@297 -- # x722=() 00:25:03.257 00:58:55 -- nvmf/common.sh@297 -- # local -ga x722 00:25:03.257 00:58:55 -- nvmf/common.sh@298 -- # mlx=() 00:25:03.257 00:58:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:03.257 00:58:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.257 00:58:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.257 00:58:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.257 00:58:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:25:03.257 00:58:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.257 00:58:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.257 00:58:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.257 00:58:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.257 00:58:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.257 00:58:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.257 00:58:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.257 00:58:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:03.257 00:58:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:03.257 00:58:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:03.257 00:58:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.257 00:58:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:03.257 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:03.257 00:58:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:03.257 00:58:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:03.257 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:03.257 00:58:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:03.257 00:58:55 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.257 00:58:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.257 00:58:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:03.257 00:58:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.257 00:58:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:03.257 Found net devices under 0000:86:00.0: cvl_0_0 00:25:03.257 00:58:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.257 00:58:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:03.257 00:58:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.257 00:58:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:03.257 00:58:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.257 00:58:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:03.257 Found net devices under 0000:86:00.1: cvl_0_1 00:25:03.257 00:58:55 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:03.257 00:58:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:03.257 00:58:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:03.257 00:58:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:03.257 00:58:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:03.257 00:58:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.257 00:58:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.257 00:58:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.257 00:58:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:03.257 00:58:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.257 00:58:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.257 00:58:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:03.257 00:58:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.257 00:58:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.257 00:58:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:03.257 00:58:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:03.257 00:58:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.257 00:58:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.257 00:58:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.257 00:58:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.257 00:58:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:03.257 00:58:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.257 00:58:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.257 00:58:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.257 00:58:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:03.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:25:03.257 00:25:03.257 --- 10.0.0.2 ping statistics --- 00:25:03.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.257 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:25:03.257 00:58:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:03.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:25:03.257 00:25:03.257 --- 10.0.0.1 ping statistics --- 00:25:03.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.257 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:25:03.257 00:58:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.257 00:58:55 -- nvmf/common.sh@411 -- # return 0 00:25:03.257 00:58:55 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:25:03.257 00:58:55 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:05.792 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:05.792 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:05.792 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:05.792 00:58:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.792 00:58:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:05.792 00:58:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:05.792 00:58:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.792 00:58:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:05.792 00:58:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:05.792 00:58:58 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:05.792 00:58:58 -- target/dif.sh@137 -- # nvmfappstart 00:25:05.792 00:58:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:05.792 00:58:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:05.792 00:58:58 -- common/autotest_common.sh@10 -- # set +x 00:25:05.792 00:58:58 -- nvmf/common.sh@470 -- # nvmfpid=1830856 00:25:05.792 00:58:58 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:05.792 00:58:58 -- nvmf/common.sh@471 -- # waitforlisten 1830856 00:25:05.792 00:58:58 -- common/autotest_common.sh@817 -- # '[' -z 1830856 ']' 00:25:05.792 00:58:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.792 00:58:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:05.792 00:58:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
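nvmftestinit prepares the TCP fabric by splitting the two ice ports between the root namespace (initiator side) and a dedicated network namespace for the target. Reduced to plain commands, the setup traced above is roughly the following sketch; the interface names cvl_0_0 and cvl_0_1 are the ones enumerated earlier in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> initiator
    # the nvmf target (pid 1830856 above) is then launched inside the namespace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF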
00:25:05.792 00:58:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:05.792 00:58:58 -- common/autotest_common.sh@10 -- # set +x 00:25:05.792 [2024-04-27 00:58:58.374994] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:25:05.792 [2024-04-27 00:58:58.375036] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.792 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.792 [2024-04-27 00:58:58.431298] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.051 [2024-04-27 00:58:58.510853] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.051 [2024-04-27 00:58:58.510882] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.051 [2024-04-27 00:58:58.510890] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.051 [2024-04-27 00:58:58.510896] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.051 [2024-04-27 00:58:58.510901] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.051 [2024-04-27 00:58:58.510921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.618 00:58:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:06.618 00:58:59 -- common/autotest_common.sh@850 -- # return 0 00:25:06.618 00:58:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:06.618 00:58:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:06.618 00:58:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.618 00:58:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.618 00:58:59 -- target/dif.sh@139 -- # create_transport 00:25:06.618 00:58:59 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:06.618 00:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.618 00:58:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.618 [2024-04-27 00:58:59.212957] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.618 00:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.618 00:58:59 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:06.618 00:58:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:06.618 00:58:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:06.618 00:58:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.877 ************************************ 00:25:06.877 START TEST fio_dif_1_default 00:25:06.877 ************************************ 00:25:06.877 00:58:59 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:25:06.877 00:58:59 -- target/dif.sh@86 -- # create_subsystems 0 00:25:06.877 00:58:59 -- target/dif.sh@28 -- # local sub 00:25:06.877 00:58:59 -- target/dif.sh@30 -- # for sub in "$@" 00:25:06.877 00:58:59 -- target/dif.sh@31 -- # create_subsystem 0 00:25:06.877 00:58:59 -- target/dif.sh@18 -- # local sub_id=0 00:25:06.877 00:58:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:06.877 00:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.877 00:58:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.877 
bdev_null0 00:25:06.877 00:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.877 00:58:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:06.877 00:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.877 00:58:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.877 00:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.877 00:58:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:06.877 00:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.877 00:58:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.877 00:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.877 00:58:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:06.877 00:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.877 00:58:59 -- common/autotest_common.sh@10 -- # set +x 00:25:06.877 [2024-04-27 00:58:59.381511] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.877 00:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.877 00:58:59 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:06.877 00:58:59 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:06.877 00:58:59 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:06.877 00:58:59 -- nvmf/common.sh@521 -- # config=() 00:25:06.877 00:58:59 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:06.877 00:58:59 -- nvmf/common.sh@521 -- # local subsystem config 00:25:06.877 00:58:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:06.877 00:58:59 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:06.877 00:58:59 -- target/dif.sh@82 -- # gen_fio_conf 00:25:06.877 00:58:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:06.877 { 00:25:06.877 "params": { 00:25:06.877 "name": "Nvme$subsystem", 00:25:06.877 "trtype": "$TEST_TRANSPORT", 00:25:06.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:06.877 "adrfam": "ipv4", 00:25:06.877 "trsvcid": "$NVMF_PORT", 00:25:06.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:06.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:06.877 "hdgst": ${hdgst:-false}, 00:25:06.877 "ddgst": ${ddgst:-false} 00:25:06.877 }, 00:25:06.877 "method": "bdev_nvme_attach_controller" 00:25:06.877 } 00:25:06.877 EOF 00:25:06.877 )") 00:25:06.877 00:58:59 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:06.877 00:58:59 -- target/dif.sh@54 -- # local file 00:25:06.877 00:58:59 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:06.877 00:58:59 -- target/dif.sh@56 -- # cat 00:25:06.877 00:58:59 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:06.877 00:58:59 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:06.877 00:58:59 -- common/autotest_common.sh@1327 -- # shift 00:25:06.877 00:58:59 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:06.877 00:58:59 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:06.877 00:58:59 -- nvmf/common.sh@543 -- # cat 00:25:06.877 00:58:59 -- target/dif.sh@72 -- # (( file = 1 
)) 00:25:06.877 00:58:59 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:06.877 00:58:59 -- target/dif.sh@72 -- # (( file <= files )) 00:25:06.877 00:58:59 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:06.877 00:58:59 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:06.877 00:58:59 -- nvmf/common.sh@545 -- # jq . 00:25:06.877 00:58:59 -- nvmf/common.sh@546 -- # IFS=, 00:25:06.877 00:58:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:06.877 "params": { 00:25:06.877 "name": "Nvme0", 00:25:06.877 "trtype": "tcp", 00:25:06.877 "traddr": "10.0.0.2", 00:25:06.877 "adrfam": "ipv4", 00:25:06.877 "trsvcid": "4420", 00:25:06.877 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.877 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:06.877 "hdgst": false, 00:25:06.877 "ddgst": false 00:25:06.877 }, 00:25:06.877 "method": "bdev_nvme_attach_controller" 00:25:06.877 }' 00:25:06.877 00:58:59 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:06.877 00:58:59 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:06.877 00:58:59 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:06.877 00:58:59 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:06.877 00:58:59 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:06.877 00:58:59 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:06.877 00:58:59 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:06.877 00:58:59 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:06.877 00:58:59 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:06.877 00:58:59 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:07.136 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:07.136 fio-3.35 00:25:07.136 Starting 1 thread 00:25:07.136 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.339 00:25:19.339 filename0: (groupid=0, jobs=1): err= 0: pid=1831247: Sat Apr 27 00:59:10 2024 00:25:19.339 read: IOPS=181, BW=724KiB/s (742kB/s)(7248KiB/10005msec) 00:25:19.339 slat (nsec): min=5761, max=36032, avg=6349.83, stdev=1481.71 00:25:19.339 clat (usec): min=1490, max=44042, avg=22066.48, stdev=20495.06 00:25:19.339 lat (usec): min=1496, max=44070, avg=22072.83, stdev=20495.13 00:25:19.339 clat percentiles (usec): 00:25:19.339 | 1.00th=[ 1500], 5.00th=[ 1516], 10.00th=[ 1516], 20.00th=[ 1532], 00:25:19.339 | 30.00th=[ 1532], 40.00th=[ 1549], 50.00th=[41157], 60.00th=[42206], 00:25:19.339 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:25:19.339 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[44303], 00:25:19.339 | 99.99th=[44303] 00:25:19.339 bw ( KiB/s): min= 704, max= 768, per=99.94%, avg=724.21, stdev=26.58, samples=19 00:25:19.339 iops : min= 176, max= 192, avg=181.05, stdev= 6.65, samples=19 00:25:19.339 lat (msec) : 2=49.89%, 50=50.11% 00:25:19.339 cpu : usr=94.69%, sys=5.06%, ctx=17, majf=0, minf=254 00:25:19.339 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:19.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.339 issued rwts: total=1812,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:25:19.339 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:19.339 00:25:19.339 Run status group 0 (all jobs): 00:25:19.339 READ: bw=724KiB/s (742kB/s), 724KiB/s-724KiB/s (742kB/s-742kB/s), io=7248KiB (7422kB), run=10005-10005msec 00:25:19.339 00:59:10 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:19.339 00:59:10 -- target/dif.sh@43 -- # local sub 00:25:19.339 00:59:10 -- target/dif.sh@45 -- # for sub in "$@" 00:25:19.339 00:59:10 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:19.339 00:59:10 -- target/dif.sh@36 -- # local sub_id=0 00:25:19.339 00:59:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:19.339 00:59:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.339 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:25:19.339 00:59:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.339 00:59:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:19.339 00:59:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.339 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:25:19.339 00:59:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.339 00:25:19.339 real 0m11.113s 00:25:19.339 user 0m16.397s 00:25:19.339 sys 0m0.844s 00:25:19.339 00:59:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:19.339 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:25:19.339 ************************************ 00:25:19.339 END TEST fio_dif_1_default 00:25:19.339 ************************************ 00:25:19.339 00:59:10 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:19.339 00:59:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:19.339 00:59:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:19.339 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:25:19.339 ************************************ 00:25:19.339 START TEST fio_dif_1_multi_subsystems 00:25:19.339 ************************************ 00:25:19.339 00:59:10 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:25:19.339 00:59:10 -- target/dif.sh@92 -- # local files=1 00:25:19.339 00:59:10 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:19.339 00:59:10 -- target/dif.sh@28 -- # local sub 00:25:19.339 00:59:10 -- target/dif.sh@30 -- # for sub in "$@" 00:25:19.339 00:59:10 -- target/dif.sh@31 -- # create_subsystem 0 00:25:19.339 00:59:10 -- target/dif.sh@18 -- # local sub_id=0 00:25:19.339 00:59:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:19.339 00:59:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.339 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:25:19.339 bdev_null0 00:25:19.339 00:59:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.339 00:59:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:19.339 00:59:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.339 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:25:19.339 00:59:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.339 00:59:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:19.339 00:59:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.339 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:25:19.339 00:59:10 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.339 00:59:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:19.339 00:59:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.339 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:25:19.339 [2024-04-27 00:59:10.657501] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.339 00:59:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.339 00:59:10 -- target/dif.sh@30 -- # for sub in "$@" 00:25:19.339 00:59:10 -- target/dif.sh@31 -- # create_subsystem 1 00:25:19.339 00:59:10 -- target/dif.sh@18 -- # local sub_id=1 00:25:19.339 00:59:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:19.339 00:59:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.339 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:25:19.339 bdev_null1 00:25:19.339 00:59:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.339 00:59:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:19.339 00:59:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.339 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:25:19.339 00:59:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.339 00:59:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:19.339 00:59:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.339 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:25:19.339 00:59:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.340 00:59:10 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.340 00:59:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.340 00:59:10 -- common/autotest_common.sh@10 -- # set +x 00:25:19.340 00:59:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.340 00:59:10 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:19.340 00:59:10 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:19.340 00:59:10 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:19.340 00:59:10 -- nvmf/common.sh@521 -- # config=() 00:25:19.340 00:59:10 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:19.340 00:59:10 -- target/dif.sh@82 -- # gen_fio_conf 00:25:19.340 00:59:10 -- nvmf/common.sh@521 -- # local subsystem config 00:25:19.340 00:59:10 -- target/dif.sh@54 -- # local file 00:25:19.340 00:59:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:19.340 00:59:10 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:19.340 00:59:10 -- target/dif.sh@56 -- # cat 00:25:19.340 00:59:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:19.340 { 00:25:19.340 "params": { 00:25:19.340 "name": "Nvme$subsystem", 00:25:19.340 "trtype": "$TEST_TRANSPORT", 00:25:19.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.340 "adrfam": "ipv4", 00:25:19.340 "trsvcid": "$NVMF_PORT", 00:25:19.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.340 "hdgst": ${hdgst:-false}, 00:25:19.340 "ddgst": ${ddgst:-false} 00:25:19.340 }, 
00:25:19.340 "method": "bdev_nvme_attach_controller" 00:25:19.340 } 00:25:19.340 EOF 00:25:19.340 )") 00:25:19.340 00:59:10 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:19.340 00:59:10 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:19.340 00:59:10 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:19.340 00:59:10 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:19.340 00:59:10 -- common/autotest_common.sh@1327 -- # shift 00:25:19.340 00:59:10 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:19.340 00:59:10 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:19.340 00:59:10 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:19.340 00:59:10 -- nvmf/common.sh@543 -- # cat 00:25:19.340 00:59:10 -- target/dif.sh@72 -- # (( file <= files )) 00:25:19.340 00:59:10 -- target/dif.sh@73 -- # cat 00:25:19.340 00:59:10 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:19.340 00:59:10 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:19.340 00:59:10 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:19.340 00:59:10 -- target/dif.sh@72 -- # (( file++ )) 00:25:19.340 00:59:10 -- target/dif.sh@72 -- # (( file <= files )) 00:25:19.340 00:59:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:19.340 00:59:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:19.340 { 00:25:19.340 "params": { 00:25:19.340 "name": "Nvme$subsystem", 00:25:19.340 "trtype": "$TEST_TRANSPORT", 00:25:19.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:19.340 "adrfam": "ipv4", 00:25:19.340 "trsvcid": "$NVMF_PORT", 00:25:19.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:19.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:19.340 "hdgst": ${hdgst:-false}, 00:25:19.340 "ddgst": ${ddgst:-false} 00:25:19.340 }, 00:25:19.340 "method": "bdev_nvme_attach_controller" 00:25:19.340 } 00:25:19.340 EOF 00:25:19.340 )") 00:25:19.340 00:59:10 -- nvmf/common.sh@543 -- # cat 00:25:19.340 00:59:10 -- nvmf/common.sh@545 -- # jq . 
00:25:19.340 00:59:10 -- nvmf/common.sh@546 -- # IFS=, 00:25:19.340 00:59:10 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:19.340 "params": { 00:25:19.340 "name": "Nvme0", 00:25:19.340 "trtype": "tcp", 00:25:19.340 "traddr": "10.0.0.2", 00:25:19.340 "adrfam": "ipv4", 00:25:19.340 "trsvcid": "4420", 00:25:19.340 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:19.340 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:19.340 "hdgst": false, 00:25:19.340 "ddgst": false 00:25:19.340 }, 00:25:19.340 "method": "bdev_nvme_attach_controller" 00:25:19.340 },{ 00:25:19.340 "params": { 00:25:19.340 "name": "Nvme1", 00:25:19.340 "trtype": "tcp", 00:25:19.340 "traddr": "10.0.0.2", 00:25:19.340 "adrfam": "ipv4", 00:25:19.340 "trsvcid": "4420", 00:25:19.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:19.340 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:19.340 "hdgst": false, 00:25:19.340 "ddgst": false 00:25:19.340 }, 00:25:19.340 "method": "bdev_nvme_attach_controller" 00:25:19.340 }' 00:25:19.340 00:59:10 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:19.340 00:59:10 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:19.340 00:59:10 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:19.340 00:59:10 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:19.340 00:59:10 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:19.340 00:59:10 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:19.340 00:59:10 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:19.340 00:59:10 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:19.340 00:59:10 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:19.340 00:59:10 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:19.340 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:19.340 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:19.340 fio-3.35 00:25:19.340 Starting 2 threads 00:25:19.340 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.341 00:25:29.341 filename0: (groupid=0, jobs=1): err= 0: pid=1833217: Sat Apr 27 00:59:21 2024 00:25:29.341 read: IOPS=95, BW=380KiB/s (390kB/s)(3808KiB/10008msec) 00:25:29.341 slat (nsec): min=3008, max=12713, avg=7380.30, stdev=2127.03 00:25:29.341 clat (usec): min=40982, max=47737, avg=42027.69, stdev=412.69 00:25:29.341 lat (usec): min=40988, max=47746, avg=42035.07, stdev=412.57 00:25:29.341 clat percentiles (usec): 00:25:29.341 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:25:29.341 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:25:29.341 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:25:29.341 | 99.00th=[43254], 99.50th=[43254], 99.90th=[47973], 99.95th=[47973], 00:25:29.341 | 99.99th=[47973] 00:25:29.341 bw ( KiB/s): min= 352, max= 384, per=49.80%, avg=379.20, stdev=11.72, samples=20 00:25:29.341 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:25:29.341 lat (msec) : 50=100.00% 00:25:29.341 cpu : usr=97.55%, sys=2.21%, ctx=13, majf=0, minf=55 00:25:29.341 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:29.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:25:29.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.341 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.341 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:29.341 filename1: (groupid=0, jobs=1): err= 0: pid=1833218: Sat Apr 27 00:59:21 2024 00:25:29.341 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10006msec) 00:25:29.341 slat (nsec): min=4223, max=22083, avg=7439.84, stdev=2280.48 00:25:29.341 clat (usec): min=41788, max=44189, avg=42016.58, stdev=223.76 00:25:29.341 lat (usec): min=41794, max=44201, avg=42024.02, stdev=223.77 00:25:29.341 clat percentiles (usec): 00:25:29.341 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:25:29.341 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:25:29.341 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:25:29.341 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:25:29.341 | 99.99th=[44303] 00:25:29.341 bw ( KiB/s): min= 352, max= 384, per=49.93%, avg=380.63, stdev=10.09, samples=19 00:25:29.341 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:25:29.341 lat (msec) : 50=100.00% 00:25:29.341 cpu : usr=97.62%, sys=2.13%, ctx=9, majf=0, minf=169 00:25:29.341 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:29.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.341 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.341 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:29.341 00:25:29.341 Run status group 0 (all jobs): 00:25:29.341 READ: bw=761KiB/s (779kB/s), 380KiB/s-381KiB/s (390kB/s-390kB/s), io=7616KiB (7799kB), run=10006-10008msec 00:25:29.341 00:59:21 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:29.341 00:59:21 -- target/dif.sh@43 -- # local sub 00:25:29.342 00:59:21 -- target/dif.sh@45 -- # for sub in "$@" 00:25:29.342 00:59:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:29.342 00:59:21 -- target/dif.sh@36 -- # local sub_id=0 00:25:29.342 00:59:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:29.342 00:59:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.342 00:59:21 -- common/autotest_common.sh@10 -- # set +x 00:25:29.342 00:59:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.342 00:59:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:29.342 00:59:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.342 00:59:21 -- common/autotest_common.sh@10 -- # set +x 00:25:29.342 00:59:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.342 00:59:21 -- target/dif.sh@45 -- # for sub in "$@" 00:25:29.342 00:59:21 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:29.342 00:59:21 -- target/dif.sh@36 -- # local sub_id=1 00:25:29.342 00:59:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:29.342 00:59:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.342 00:59:21 -- common/autotest_common.sh@10 -- # set +x 00:25:29.342 00:59:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.342 00:59:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:29.342 00:59:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.342 00:59:21 -- common/autotest_common.sh@10 -- # set +x 00:25:29.342 
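Neither job file above (filename0/filename1) goes through the kernel NVMe stack: fio is loaded with the SPDK bdev ioengine and handed a JSON config, printed earlier in the trace, that attaches one bdev_nvme controller per subsystem over TCP. Stripped of the shell plumbing, the invocation is roughly the sketch below; the assumption that the generated job file names one filenameN section per attached NvmeXn1 bdev comes from gen_fio_conf, which is not shown in this excerpt:

    # fd 62: JSON with a bdev_nvme_attach_controller entry for cnode0 and cnode1
    # fd 61: generated fio job file, one filenameN section per attached bdev (assumed)
    LD_PRELOAD=./build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61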
00:59:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.342 00:25:29.342 real 0m11.295s 00:25:29.342 user 0m26.623s 00:25:29.342 sys 0m0.698s 00:25:29.342 00:59:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:29.342 00:59:21 -- common/autotest_common.sh@10 -- # set +x 00:25:29.342 ************************************ 00:25:29.342 END TEST fio_dif_1_multi_subsystems 00:25:29.342 ************************************ 00:25:29.342 00:59:21 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:29.342 00:59:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:29.342 00:59:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:29.342 00:59:21 -- common/autotest_common.sh@10 -- # set +x 00:25:29.601 ************************************ 00:25:29.601 START TEST fio_dif_rand_params 00:25:29.601 ************************************ 00:25:29.601 00:59:22 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:25:29.601 00:59:22 -- target/dif.sh@100 -- # local NULL_DIF 00:25:29.601 00:59:22 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:29.601 00:59:22 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:29.601 00:59:22 -- target/dif.sh@103 -- # bs=128k 00:25:29.601 00:59:22 -- target/dif.sh@103 -- # numjobs=3 00:25:29.601 00:59:22 -- target/dif.sh@103 -- # iodepth=3 00:25:29.601 00:59:22 -- target/dif.sh@103 -- # runtime=5 00:25:29.601 00:59:22 -- target/dif.sh@105 -- # create_subsystems 0 00:25:29.601 00:59:22 -- target/dif.sh@28 -- # local sub 00:25:29.601 00:59:22 -- target/dif.sh@30 -- # for sub in "$@" 00:25:29.601 00:59:22 -- target/dif.sh@31 -- # create_subsystem 0 00:25:29.601 00:59:22 -- target/dif.sh@18 -- # local sub_id=0 00:25:29.601 00:59:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:29.601 00:59:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.601 00:59:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.601 bdev_null0 00:25:29.601 00:59:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.601 00:59:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:29.601 00:59:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.601 00:59:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.601 00:59:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.601 00:59:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:29.601 00:59:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.601 00:59:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.601 00:59:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.601 00:59:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:29.601 00:59:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:29.601 00:59:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.601 [2024-04-27 00:59:22.114677] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.601 00:59:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:29.601 00:59:22 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:29.601 00:59:22 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:29.601 00:59:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:29.601 00:59:22 -- nvmf/common.sh@521 -- # config=() 00:25:29.601 
00:59:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:29.601 00:59:22 -- nvmf/common.sh@521 -- # local subsystem config 00:25:29.601 00:59:22 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:29.601 00:59:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:29.601 00:59:22 -- target/dif.sh@82 -- # gen_fio_conf 00:25:29.601 00:59:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:29.601 { 00:25:29.601 "params": { 00:25:29.601 "name": "Nvme$subsystem", 00:25:29.601 "trtype": "$TEST_TRANSPORT", 00:25:29.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:29.601 "adrfam": "ipv4", 00:25:29.601 "trsvcid": "$NVMF_PORT", 00:25:29.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:29.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:29.601 "hdgst": ${hdgst:-false}, 00:25:29.601 "ddgst": ${ddgst:-false} 00:25:29.601 }, 00:25:29.601 "method": "bdev_nvme_attach_controller" 00:25:29.601 } 00:25:29.601 EOF 00:25:29.601 )") 00:25:29.601 00:59:22 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:29.601 00:59:22 -- target/dif.sh@54 -- # local file 00:25:29.601 00:59:22 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:29.601 00:59:22 -- target/dif.sh@56 -- # cat 00:25:29.601 00:59:22 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:29.601 00:59:22 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:29.601 00:59:22 -- common/autotest_common.sh@1327 -- # shift 00:25:29.601 00:59:22 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:29.601 00:59:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.601 00:59:22 -- nvmf/common.sh@543 -- # cat 00:25:29.601 00:59:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:29.601 00:59:22 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:29.601 00:59:22 -- target/dif.sh@72 -- # (( file <= files )) 00:25:29.601 00:59:22 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:29.601 00:59:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:29.601 00:59:22 -- nvmf/common.sh@545 -- # jq . 
00:25:29.601 00:59:22 -- nvmf/common.sh@546 -- # IFS=, 00:25:29.601 00:59:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:29.601 "params": { 00:25:29.601 "name": "Nvme0", 00:25:29.601 "trtype": "tcp", 00:25:29.601 "traddr": "10.0.0.2", 00:25:29.601 "adrfam": "ipv4", 00:25:29.601 "trsvcid": "4420", 00:25:29.601 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:29.601 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:29.601 "hdgst": false, 00:25:29.601 "ddgst": false 00:25:29.601 }, 00:25:29.601 "method": "bdev_nvme_attach_controller" 00:25:29.601 }' 00:25:29.601 00:59:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:29.601 00:59:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:29.601 00:59:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.601 00:59:22 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:29.601 00:59:22 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:29.601 00:59:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:29.601 00:59:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:29.601 00:59:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:29.601 00:59:22 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:29.601 00:59:22 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:29.860 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:29.860 ... 00:25:29.860 fio-3.35 00:25:29.860 Starting 3 threads 00:25:29.860 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.452 00:25:36.452 filename0: (groupid=0, jobs=1): err= 0: pid=1835196: Sat Apr 27 00:59:28 2024 00:25:36.452 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(151MiB/5007msec) 00:25:36.453 slat (nsec): min=4308, max=16428, avg=8590.50, stdev=2291.60 00:25:36.453 clat (usec): min=4811, max=93095, avg=12396.01, stdev=13173.20 00:25:36.453 lat (usec): min=4818, max=93107, avg=12404.60, stdev=13173.37 00:25:36.453 clat percentiles (usec): 00:25:36.453 | 1.00th=[ 4948], 5.00th=[ 5407], 10.00th=[ 5669], 20.00th=[ 6128], 00:25:36.453 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 7963], 60.00th=[ 8717], 00:25:36.453 | 70.00th=[ 9765], 80.00th=[11076], 90.00th=[21365], 95.00th=[50070], 00:25:36.453 | 99.00th=[54264], 99.50th=[54789], 99.90th=[92799], 99.95th=[92799], 00:25:36.453 | 99.99th=[92799] 00:25:36.453 bw ( KiB/s): min=19712, max=37376, per=38.94%, avg=30899.20, stdev=5606.15, samples=10 00:25:36.453 iops : min= 154, max= 292, avg=241.40, stdev=43.80, samples=10 00:25:36.453 lat (msec) : 10=71.74%, 20=18.02%, 50=5.29%, 100=4.96% 00:25:36.453 cpu : usr=94.67%, sys=4.67%, ctx=9, majf=0, minf=63 00:25:36.453 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:36.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.453 issued rwts: total=1210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.453 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:36.453 filename0: (groupid=0, jobs=1): err= 0: pid=1835197: Sat Apr 27 00:59:28 2024 00:25:36.453 read: IOPS=156, BW=19.5MiB/s (20.5MB/s)(97.6MiB/5004msec) 00:25:36.453 slat (nsec): min=6108, max=23985, avg=8882.40, stdev=2574.91 00:25:36.453 clat (usec): 
min=4858, max=97776, avg=19200.17, stdev=18958.46 00:25:36.453 lat (usec): min=4867, max=97788, avg=19209.05, stdev=18958.67 00:25:36.453 clat percentiles (usec): 00:25:36.453 | 1.00th=[ 5276], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 7373], 00:25:36.453 | 30.00th=[ 7963], 40.00th=[ 8848], 50.00th=[10028], 60.00th=[11469], 00:25:36.453 | 70.00th=[13042], 80.00th=[49021], 90.00th=[53740], 95.00th=[54789], 00:25:36.453 | 99.00th=[58983], 99.50th=[94897], 99.90th=[98042], 99.95th=[98042], 00:25:36.453 | 99.99th=[98042] 00:25:36.453 bw ( KiB/s): min=13312, max=31744, per=25.10%, avg=19916.80, stdev=5495.63, samples=10 00:25:36.453 iops : min= 104, max= 248, avg=155.60, stdev=42.93, samples=10 00:25:36.453 lat (msec) : 10=49.68%, 20=28.17%, 50=3.71%, 100=18.44% 00:25:36.453 cpu : usr=96.34%, sys=3.26%, ctx=6, majf=0, minf=116 00:25:36.453 IO depths : 1=3.7%, 2=96.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:36.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.453 issued rwts: total=781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.453 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:36.453 filename0: (groupid=0, jobs=1): err= 0: pid=1835198: Sat Apr 27 00:59:28 2024 00:25:36.453 read: IOPS=222, BW=27.8MiB/s (29.1MB/s)(139MiB/5005msec) 00:25:36.453 slat (nsec): min=6108, max=24401, avg=8540.18, stdev=2587.60 00:25:36.453 clat (usec): min=4462, max=56978, avg=13475.79, stdev=13754.95 00:25:36.453 lat (usec): min=4469, max=56985, avg=13484.33, stdev=13755.09 00:25:36.453 clat percentiles (usec): 00:25:36.453 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6783], 00:25:36.453 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8291], 60.00th=[ 9110], 00:25:36.453 | 70.00th=[ 9896], 80.00th=[11863], 90.00th=[48497], 95.00th=[50594], 00:25:36.453 | 99.00th=[53216], 99.50th=[54264], 99.90th=[56886], 99.95th=[56886], 00:25:36.453 | 99.99th=[56886] 00:25:36.453 bw ( KiB/s): min=19968, max=38144, per=35.82%, avg=28422.40, stdev=5707.49, samples=10 00:25:36.453 iops : min= 156, max= 298, avg=222.00, stdev=44.55, samples=10 00:25:36.453 lat (msec) : 10=70.44%, 20=17.70%, 50=5.30%, 100=6.56% 00:25:36.453 cpu : usr=94.92%, sys=4.38%, ctx=7, majf=0, minf=81 00:25:36.453 IO depths : 1=4.3%, 2=95.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:36.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.453 issued rwts: total=1113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.453 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:36.453 00:25:36.453 Run status group 0 (all jobs): 00:25:36.453 READ: bw=77.5MiB/s (81.3MB/s), 19.5MiB/s-30.2MiB/s (20.5MB/s-31.7MB/s), io=388MiB (407MB), run=5004-5007msec 00:25:36.453 00:59:28 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:36.453 00:59:28 -- target/dif.sh@43 -- # local sub 00:25:36.453 00:59:28 -- target/dif.sh@45 -- # for sub in "$@" 00:25:36.453 00:59:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:36.453 00:59:28 -- target/dif.sh@36 -- # local sub_id=0 00:25:36.453 00:59:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:36.453 00:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:25:36.453 00:59:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:36.453 00:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.453 00:59:28 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:36.453 00:59:28 -- target/dif.sh@109 -- # bs=4k 00:25:36.453 00:59:28 -- target/dif.sh@109 -- # numjobs=8 00:25:36.453 00:59:28 -- target/dif.sh@109 -- # iodepth=16 00:25:36.453 00:59:28 -- target/dif.sh@109 -- # runtime= 00:25:36.453 00:59:28 -- target/dif.sh@109 -- # files=2 00:25:36.453 00:59:28 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:36.453 00:59:28 -- target/dif.sh@28 -- # local sub 00:25:36.453 00:59:28 -- target/dif.sh@30 -- # for sub in "$@" 00:25:36.453 00:59:28 -- target/dif.sh@31 -- # create_subsystem 0 00:25:36.453 00:59:28 -- target/dif.sh@18 -- # local sub_id=0 00:25:36.453 00:59:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:36.453 00:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 bdev_null0 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.453 00:59:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:36.453 00:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.453 00:59:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:36.453 00:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.453 00:59:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:36.453 00:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 [2024-04-27 00:59:28.265517] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.453 00:59:28 -- target/dif.sh@30 -- # for sub in "$@" 00:25:36.453 00:59:28 -- target/dif.sh@31 -- # create_subsystem 1 00:25:36.453 00:59:28 -- target/dif.sh@18 -- # local sub_id=1 00:25:36.453 00:59:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:36.453 00:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 bdev_null1 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.453 00:59:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:36.453 00:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.453 00:59:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:36.453 00:59:28 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.453 00:59:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:36.453 00:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.453 00:59:28 -- target/dif.sh@30 -- # for sub in "$@" 00:25:36.453 00:59:28 -- target/dif.sh@31 -- # create_subsystem 2 00:25:36.453 00:59:28 -- target/dif.sh@18 -- # local sub_id=2 00:25:36.453 00:59:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:36.453 00:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 bdev_null2 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.453 00:59:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:36.453 00:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.453 00:59:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:36.453 00:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.453 00:59:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:36.453 00:59:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.453 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:36.453 00:59:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.453 00:59:28 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:36.453 00:59:28 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:36.453 00:59:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:36.453 00:59:28 -- nvmf/common.sh@521 -- # config=() 00:25:36.453 00:59:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:36.453 00:59:28 -- nvmf/common.sh@521 -- # local subsystem config 00:25:36.453 00:59:28 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:36.453 00:59:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:36.453 00:59:28 -- target/dif.sh@82 -- # gen_fio_conf 00:25:36.453 00:59:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:36.453 { 00:25:36.453 "params": { 00:25:36.453 "name": "Nvme$subsystem", 00:25:36.454 "trtype": "$TEST_TRANSPORT", 00:25:36.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.454 "adrfam": "ipv4", 00:25:36.454 "trsvcid": "$NVMF_PORT", 00:25:36.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.454 "hdgst": ${hdgst:-false}, 00:25:36.454 "ddgst": ${ddgst:-false} 00:25:36.454 }, 00:25:36.454 "method": "bdev_nvme_attach_controller" 00:25:36.454 } 00:25:36.454 EOF 00:25:36.454 )") 
00:25:36.454 00:59:28 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:36.454 00:59:28 -- target/dif.sh@54 -- # local file 00:25:36.454 00:59:28 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:36.454 00:59:28 -- target/dif.sh@56 -- # cat 00:25:36.454 00:59:28 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:36.454 00:59:28 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:36.454 00:59:28 -- common/autotest_common.sh@1327 -- # shift 00:25:36.454 00:59:28 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:36.454 00:59:28 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:36.454 00:59:28 -- nvmf/common.sh@543 -- # cat 00:25:36.454 00:59:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:36.454 00:59:28 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:36.454 00:59:28 -- target/dif.sh@72 -- # (( file <= files )) 00:25:36.454 00:59:28 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:36.454 00:59:28 -- target/dif.sh@73 -- # cat 00:25:36.454 00:59:28 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:36.454 00:59:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:36.454 00:59:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:36.454 { 00:25:36.454 "params": { 00:25:36.454 "name": "Nvme$subsystem", 00:25:36.454 "trtype": "$TEST_TRANSPORT", 00:25:36.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.454 "adrfam": "ipv4", 00:25:36.454 "trsvcid": "$NVMF_PORT", 00:25:36.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.454 "hdgst": ${hdgst:-false}, 00:25:36.454 "ddgst": ${ddgst:-false} 00:25:36.454 }, 00:25:36.454 "method": "bdev_nvme_attach_controller" 00:25:36.454 } 00:25:36.454 EOF 00:25:36.454 )") 00:25:36.454 00:59:28 -- target/dif.sh@72 -- # (( file++ )) 00:25:36.454 00:59:28 -- target/dif.sh@72 -- # (( file <= files )) 00:25:36.454 00:59:28 -- nvmf/common.sh@543 -- # cat 00:25:36.454 00:59:28 -- target/dif.sh@73 -- # cat 00:25:36.454 00:59:28 -- target/dif.sh@72 -- # (( file++ )) 00:25:36.454 00:59:28 -- target/dif.sh@72 -- # (( file <= files )) 00:25:36.454 00:59:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:36.454 00:59:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:36.454 { 00:25:36.454 "params": { 00:25:36.454 "name": "Nvme$subsystem", 00:25:36.454 "trtype": "$TEST_TRANSPORT", 00:25:36.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:36.454 "adrfam": "ipv4", 00:25:36.454 "trsvcid": "$NVMF_PORT", 00:25:36.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:36.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:36.454 "hdgst": ${hdgst:-false}, 00:25:36.454 "ddgst": ${ddgst:-false} 00:25:36.454 }, 00:25:36.454 "method": "bdev_nvme_attach_controller" 00:25:36.454 } 00:25:36.454 EOF 00:25:36.454 )") 00:25:36.454 00:59:28 -- nvmf/common.sh@543 -- # cat 00:25:36.454 00:59:28 -- nvmf/common.sh@545 -- # jq . 
00:25:36.454 00:59:28 -- nvmf/common.sh@546 -- # IFS=, 00:25:36.454 00:59:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:36.454 "params": { 00:25:36.454 "name": "Nvme0", 00:25:36.454 "trtype": "tcp", 00:25:36.454 "traddr": "10.0.0.2", 00:25:36.454 "adrfam": "ipv4", 00:25:36.454 "trsvcid": "4420", 00:25:36.454 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:36.454 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:36.454 "hdgst": false, 00:25:36.454 "ddgst": false 00:25:36.454 }, 00:25:36.454 "method": "bdev_nvme_attach_controller" 00:25:36.454 },{ 00:25:36.454 "params": { 00:25:36.454 "name": "Nvme1", 00:25:36.454 "trtype": "tcp", 00:25:36.454 "traddr": "10.0.0.2", 00:25:36.454 "adrfam": "ipv4", 00:25:36.454 "trsvcid": "4420", 00:25:36.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:36.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:36.454 "hdgst": false, 00:25:36.454 "ddgst": false 00:25:36.454 }, 00:25:36.454 "method": "bdev_nvme_attach_controller" 00:25:36.454 },{ 00:25:36.454 "params": { 00:25:36.454 "name": "Nvme2", 00:25:36.454 "trtype": "tcp", 00:25:36.454 "traddr": "10.0.0.2", 00:25:36.454 "adrfam": "ipv4", 00:25:36.454 "trsvcid": "4420", 00:25:36.454 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:36.454 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:36.454 "hdgst": false, 00:25:36.454 "ddgst": false 00:25:36.454 }, 00:25:36.454 "method": "bdev_nvme_attach_controller" 00:25:36.454 }' 00:25:36.454 00:59:28 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:36.454 00:59:28 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:36.454 00:59:28 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:36.454 00:59:28 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:36.454 00:59:28 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:36.454 00:59:28 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:36.454 00:59:28 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:36.454 00:59:28 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:36.454 00:59:28 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:36.454 00:59:28 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:36.454 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:36.454 ... 00:25:36.454 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:36.454 ... 00:25:36.454 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:36.454 ... 
00:25:36.454 fio-3.35 00:25:36.454 Starting 24 threads 00:25:36.454 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.656 00:25:48.656 filename0: (groupid=0, jobs=1): err= 0: pid=1836248: Sat Apr 27 00:59:39 2024 00:25:48.656 read: IOPS=72, BW=289KiB/s (296kB/s)(2920KiB/10091msec) 00:25:48.656 slat (nsec): min=3137, max=39389, avg=8404.43, stdev=3049.56 00:25:48.656 clat (msec): min=5, max=409, avg=219.98, stdev=84.13 00:25:48.656 lat (msec): min=5, max=409, avg=219.99, stdev=84.13 00:25:48.656 clat percentiles (msec): 00:25:48.656 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 57], 20.00th=[ 186], 00:25:48.656 | 30.00th=[ 213], 40.00th=[ 234], 50.00th=[ 249], 60.00th=[ 255], 00:25:48.656 | 70.00th=[ 262], 80.00th=[ 268], 90.00th=[ 279], 95.00th=[ 317], 00:25:48.656 | 99.00th=[ 401], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:25:48.656 | 99.99th=[ 409] 00:25:48.656 bw ( KiB/s): min= 175, max= 896, per=5.09%, avg=289.05, stdev=147.81, samples=20 00:25:48.656 iops : min= 43, max= 224, avg=71.85, stdev=37.08, samples=20 00:25:48.656 lat (msec) : 10=6.58%, 20=2.19%, 100=2.19%, 250=41.37%, 500=47.67% 00:25:48.656 cpu : usr=99.11%, sys=0.55%, ctx=14, majf=0, minf=82 00:25:48.656 IO depths : 1=0.1%, 2=0.3%, 4=6.3%, 8=80.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:25:48.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 complete : 0=0.0%, 4=88.7%, 8=6.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 issued rwts: total=730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.656 filename0: (groupid=0, jobs=1): err= 0: pid=1836249: Sat Apr 27 00:59:39 2024 00:25:48.656 read: IOPS=70, BW=283KiB/s (290kB/s)(2856KiB/10089msec) 00:25:48.656 slat (nsec): min=4327, max=41112, avg=10848.15, stdev=4594.29 00:25:48.656 clat (msec): min=7, max=450, avg=225.97, stdev=69.53 00:25:48.656 lat (msec): min=7, max=450, avg=225.98, stdev=69.53 00:25:48.656 clat percentiles (msec): 00:25:48.656 | 1.00th=[ 8], 5.00th=[ 21], 10.00th=[ 142], 20.00th=[ 201], 00:25:48.656 | 30.00th=[ 218], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 255], 00:25:48.656 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 279], 95.00th=[ 284], 00:25:48.656 | 99.00th=[ 372], 99.50th=[ 401], 99.90th=[ 451], 99.95th=[ 451], 00:25:48.656 | 99.99th=[ 451] 00:25:48.656 bw ( KiB/s): min= 127, max= 640, per=4.90%, avg=278.70, stdev=98.22, samples=20 00:25:48.656 iops : min= 31, max= 160, avg=69.30, stdev=24.72, samples=20 00:25:48.656 lat (msec) : 10=3.64%, 20=2.10%, 50=0.98%, 250=45.66%, 500=47.62% 00:25:48.656 cpu : usr=98.65%, sys=0.95%, ctx=18, majf=0, minf=38 00:25:48.656 IO depths : 1=4.8%, 2=10.9%, 4=24.2%, 8=52.5%, 16=7.6%, 32=0.0%, >=64=0.0% 00:25:48.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 issued rwts: total=714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.656 filename0: (groupid=0, jobs=1): err= 0: pid=1836251: Sat Apr 27 00:59:39 2024 00:25:48.656 read: IOPS=57, BW=228KiB/s (234kB/s)(2296KiB/10054msec) 00:25:48.656 slat (nsec): min=6260, max=93652, avg=21413.37, stdev=22828.47 00:25:48.656 clat (msec): min=131, max=447, avg=280.02, stdev=51.92 00:25:48.656 lat (msec): min=131, max=447, avg=280.04, stdev=51.93 00:25:48.656 clat percentiles (msec): 00:25:48.656 | 1.00th=[ 167], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 253], 00:25:48.656 | 
30.00th=[ 257], 40.00th=[ 264], 50.00th=[ 266], 60.00th=[ 271], 00:25:48.656 | 70.00th=[ 279], 80.00th=[ 296], 90.00th=[ 363], 95.00th=[ 409], 00:25:48.656 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 447], 99.95th=[ 447], 00:25:48.656 | 99.99th=[ 447] 00:25:48.656 bw ( KiB/s): min= 127, max= 256, per=3.91%, avg=222.75, stdev=51.21, samples=20 00:25:48.656 iops : min= 31, max= 64, avg=55.35, stdev=12.78, samples=20 00:25:48.656 lat (msec) : 250=13.59%, 500=86.41% 00:25:48.656 cpu : usr=99.04%, sys=0.60%, ctx=14, majf=0, minf=56 00:25:48.656 IO depths : 1=3.7%, 2=9.4%, 4=23.5%, 8=54.7%, 16=8.7%, 32=0.0%, >=64=0.0% 00:25:48.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.656 filename0: (groupid=0, jobs=1): err= 0: pid=1836252: Sat Apr 27 00:59:39 2024 00:25:48.656 read: IOPS=44, BW=179KiB/s (183kB/s)(1792KiB/10030msec) 00:25:48.656 slat (nsec): min=6182, max=41370, avg=9623.23, stdev=4926.86 00:25:48.656 clat (msec): min=161, max=540, avg=358.12, stdev=83.69 00:25:48.656 lat (msec): min=161, max=540, avg=358.13, stdev=83.69 00:25:48.656 clat percentiles (msec): 00:25:48.656 | 1.00th=[ 161], 5.00th=[ 197], 10.00th=[ 262], 20.00th=[ 275], 00:25:48.656 | 30.00th=[ 321], 40.00th=[ 347], 50.00th=[ 368], 60.00th=[ 397], 00:25:48.656 | 70.00th=[ 422], 80.00th=[ 439], 90.00th=[ 439], 95.00th=[ 447], 00:25:48.656 | 99.00th=[ 527], 99.50th=[ 535], 99.90th=[ 542], 99.95th=[ 542], 00:25:48.656 | 99.99th=[ 542] 00:25:48.656 bw ( KiB/s): min= 127, max= 384, per=3.03%, avg=172.30, stdev=71.26, samples=20 00:25:48.656 iops : min= 31, max= 96, avg=42.70, stdev=17.79, samples=20 00:25:48.656 lat (msec) : 250=8.48%, 500=87.50%, 750=4.02% 00:25:48.656 cpu : usr=99.07%, sys=0.58%, ctx=8, majf=0, minf=39 00:25:48.656 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:25:48.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 issued rwts: total=448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.656 filename0: (groupid=0, jobs=1): err= 0: pid=1836253: Sat Apr 27 00:59:39 2024 00:25:48.656 read: IOPS=64, BW=257KiB/s (263kB/s)(2584KiB/10054msec) 00:25:48.656 slat (nsec): min=6294, max=36482, avg=10651.50, stdev=4470.94 00:25:48.656 clat (msec): min=161, max=462, avg=248.92, stdev=40.71 00:25:48.656 lat (msec): min=161, max=462, avg=248.93, stdev=40.71 00:25:48.656 clat percentiles (msec): 00:25:48.656 | 1.00th=[ 163], 5.00th=[ 178], 10.00th=[ 201], 20.00th=[ 218], 00:25:48.656 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 259], 00:25:48.656 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 284], 00:25:48.656 | 99.00th=[ 414], 99.50th=[ 439], 99.90th=[ 464], 99.95th=[ 464], 00:25:48.656 | 99.99th=[ 464] 00:25:48.656 bw ( KiB/s): min= 127, max= 368, per=4.42%, avg=251.55, stdev=40.00, samples=20 00:25:48.656 iops : min= 31, max= 92, avg=62.55, stdev=10.13, samples=20 00:25:48.656 lat (msec) : 250=43.65%, 500=56.35% 00:25:48.656 cpu : usr=98.86%, sys=0.78%, ctx=23, majf=0, minf=44 00:25:48.656 IO depths : 1=2.8%, 2=9.1%, 4=25.4%, 8=53.6%, 16=9.1%, 32=0.0%, >=64=0.0% 00:25:48.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.656 filename0: (groupid=0, jobs=1): err= 0: pid=1836254: Sat Apr 27 00:59:39 2024 00:25:48.656 read: IOPS=64, BW=256KiB/s (262kB/s)(2576KiB/10054msec) 00:25:48.656 slat (nsec): min=6379, max=32497, avg=11150.09, stdev=4303.06 00:25:48.656 clat (msec): min=166, max=375, avg=248.93, stdev=35.01 00:25:48.656 lat (msec): min=166, max=375, avg=248.94, stdev=35.01 00:25:48.656 clat percentiles (msec): 00:25:48.656 | 1.00th=[ 167], 5.00th=[ 186], 10.00th=[ 201], 20.00th=[ 224], 00:25:48.656 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 259], 00:25:48.656 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 288], 00:25:48.656 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:25:48.656 | 99.99th=[ 376] 00:25:48.656 bw ( KiB/s): min= 143, max= 368, per=4.48%, avg=254.75, stdev=37.40, samples=20 00:25:48.656 iops : min= 35, max= 92, avg=63.35, stdev= 9.47, samples=20 00:25:48.656 lat (msec) : 250=42.86%, 500=57.14% 00:25:48.656 cpu : usr=99.07%, sys=0.57%, ctx=13, majf=0, minf=42 00:25:48.656 IO depths : 1=0.8%, 2=6.7%, 4=23.9%, 8=57.0%, 16=11.6%, 32=0.0%, >=64=0.0% 00:25:48.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 complete : 0=0.0%, 4=93.9%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 issued rwts: total=644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.656 filename0: (groupid=0, jobs=1): err= 0: pid=1836255: Sat Apr 27 00:59:39 2024 00:25:48.656 read: IOPS=54, BW=218KiB/s (223kB/s)(2184KiB/10034msec) 00:25:48.656 slat (nsec): min=6214, max=43120, avg=11058.90, stdev=5856.61 00:25:48.656 clat (msec): min=185, max=502, avg=293.84, stdev=65.49 00:25:48.656 lat (msec): min=185, max=502, avg=293.85, stdev=65.49 00:25:48.656 clat percentiles (msec): 00:25:48.656 | 1.00th=[ 186], 5.00th=[ 211], 10.00th=[ 243], 20.00th=[ 259], 00:25:48.656 | 30.00th=[ 264], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 279], 00:25:48.656 | 70.00th=[ 288], 80.00th=[ 347], 90.00th=[ 418], 95.00th=[ 430], 00:25:48.656 | 99.00th=[ 451], 99.50th=[ 502], 99.90th=[ 502], 99.95th=[ 502], 00:25:48.656 | 99.99th=[ 502] 00:25:48.656 bw ( KiB/s): min= 127, max= 256, per=3.72%, avg=211.60, stdev=57.47, samples=20 00:25:48.656 iops : min= 31, max= 64, avg=52.60, stdev=14.32, samples=20 00:25:48.656 lat (msec) : 250=13.92%, 500=85.35%, 750=0.73% 00:25:48.656 cpu : usr=99.01%, sys=0.64%, ctx=25, majf=0, minf=41 00:25:48.656 IO depths : 1=2.7%, 2=7.7%, 4=20.3%, 8=59.5%, 16=9.7%, 32=0.0%, >=64=0.0% 00:25:48.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 complete : 0=0.0%, 4=93.0%, 8=2.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.656 issued rwts: total=546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.656 filename0: (groupid=0, jobs=1): err= 0: pid=1836256: Sat Apr 27 00:59:39 2024 00:25:48.657 read: IOPS=63, BW=255KiB/s (261kB/s)(2560KiB/10038msec) 00:25:48.657 slat (nsec): min=6157, max=47812, avg=10300.52, stdev=4246.68 00:25:48.657 clat (msec): min=61, max=436, avg=250.85, stdev=45.64 00:25:48.657 lat (msec): min=61, max=436, avg=250.86, stdev=45.64 00:25:48.657 clat percentiles (msec): 
00:25:48.657 | 1.00th=[ 144], 5.00th=[ 161], 10.00th=[ 194], 20.00th=[ 220], 00:25:48.657 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 264], 00:25:48.657 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 284], 95.00th=[ 334], 00:25:48.657 | 99.00th=[ 380], 99.50th=[ 414], 99.90th=[ 435], 99.95th=[ 435], 00:25:48.657 | 99.99th=[ 435] 00:25:48.657 bw ( KiB/s): min= 127, max= 384, per=4.39%, avg=249.20, stdev=54.39, samples=20 00:25:48.657 iops : min= 31, max= 96, avg=62.00, stdev=13.67, samples=20 00:25:48.657 lat (msec) : 100=0.31%, 250=42.50%, 500=57.19% 00:25:48.657 cpu : usr=98.88%, sys=0.76%, ctx=10, majf=0, minf=40 00:25:48.657 IO depths : 1=2.5%, 2=7.8%, 4=22.2%, 8=57.8%, 16=9.7%, 32=0.0%, >=64=0.0% 00:25:48.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.657 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.657 filename1: (groupid=0, jobs=1): err= 0: pid=1836257: Sat Apr 27 00:59:39 2024 00:25:48.657 read: IOPS=61, BW=245KiB/s (251kB/s)(2456KiB/10037msec) 00:25:48.657 slat (nsec): min=6425, max=43799, avg=10640.26, stdev=5316.53 00:25:48.657 clat (msec): min=160, max=441, avg=261.45, stdev=53.97 00:25:48.657 lat (msec): min=160, max=441, avg=261.46, stdev=53.97 00:25:48.657 clat percentiles (msec): 00:25:48.657 | 1.00th=[ 161], 5.00th=[ 178], 10.00th=[ 201], 20.00th=[ 224], 00:25:48.657 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 266], 00:25:48.657 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 351], 95.00th=[ 384], 00:25:48.657 | 99.00th=[ 418], 99.50th=[ 430], 99.90th=[ 443], 99.95th=[ 443], 00:25:48.657 | 99.99th=[ 443] 00:25:48.657 bw ( KiB/s): min= 128, max= 368, per=4.20%, avg=238.80, stdev=49.42, samples=20 00:25:48.657 iops : min= 32, max= 92, avg=59.40, stdev=12.40, samples=20 00:25:48.657 lat (msec) : 250=39.09%, 500=60.91% 00:25:48.657 cpu : usr=99.07%, sys=0.58%, ctx=38, majf=0, minf=39 00:25:48.657 IO depths : 1=2.8%, 2=9.0%, 4=24.9%, 8=54.2%, 16=9.1%, 32=0.0%, >=64=0.0% 00:25:48.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.657 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.657 filename1: (groupid=0, jobs=1): err= 0: pid=1836258: Sat Apr 27 00:59:39 2024 00:25:48.657 read: IOPS=43, BW=172KiB/s (177kB/s)(1728KiB/10025msec) 00:25:48.657 slat (nsec): min=6361, max=52296, avg=13249.18, stdev=8811.55 00:25:48.657 clat (msec): min=173, max=542, avg=371.19, stdev=72.04 00:25:48.657 lat (msec): min=173, max=543, avg=371.20, stdev=72.05 00:25:48.657 clat percentiles (msec): 00:25:48.657 | 1.00th=[ 226], 5.00th=[ 247], 10.00th=[ 268], 20.00th=[ 296], 00:25:48.657 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 380], 60.00th=[ 401], 00:25:48.657 | 70.00th=[ 426], 80.00th=[ 439], 90.00th=[ 443], 95.00th=[ 447], 00:25:48.657 | 99.00th=[ 531], 99.50th=[ 535], 99.90th=[ 542], 99.95th=[ 542], 00:25:48.657 | 99.99th=[ 542] 00:25:48.657 bw ( KiB/s): min= 127, max= 256, per=2.91%, avg=165.90, stdev=55.37, samples=20 00:25:48.657 iops : min= 31, max= 64, avg=41.10, stdev=13.92, samples=20 00:25:48.657 lat (msec) : 250=5.09%, 500=91.67%, 750=3.24% 00:25:48.657 cpu : usr=99.16%, sys=0.48%, ctx=12, majf=0, minf=42 00:25:48.657 IO depths : 1=3.2%, 
2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:25:48.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.657 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.657 filename1: (groupid=0, jobs=1): err= 0: pid=1836259: Sat Apr 27 00:59:39 2024 00:25:48.657 read: IOPS=60, BW=241KiB/s (246kB/s)(2424KiB/10072msec) 00:25:48.657 slat (nsec): min=6440, max=34734, avg=10725.36, stdev=5038.84 00:25:48.657 clat (msec): min=166, max=504, avg=265.12, stdev=59.81 00:25:48.657 lat (msec): min=166, max=504, avg=265.13, stdev=59.81 00:25:48.657 clat percentiles (msec): 00:25:48.657 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 199], 20.00th=[ 213], 00:25:48.657 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 266], 00:25:48.657 | 70.00th=[ 275], 80.00th=[ 284], 90.00th=[ 347], 95.00th=[ 418], 00:25:48.657 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 506], 99.95th=[ 506], 00:25:48.657 | 99.99th=[ 506] 00:25:48.657 bw ( KiB/s): min= 127, max= 368, per=4.14%, avg=235.55, stdev=56.64, samples=20 00:25:48.657 iops : min= 31, max= 92, avg=58.55, stdev=14.20, samples=20 00:25:48.657 lat (msec) : 250=37.62%, 500=62.05%, 750=0.33% 00:25:48.657 cpu : usr=99.08%, sys=0.56%, ctx=15, majf=0, minf=43 00:25:48.657 IO depths : 1=3.1%, 2=9.1%, 4=24.1%, 8=54.8%, 16=8.9%, 32=0.0%, >=64=0.0% 00:25:48.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.657 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.657 filename1: (groupid=0, jobs=1): err= 0: pid=1836260: Sat Apr 27 00:59:39 2024 00:25:48.657 read: IOPS=68, BW=273KiB/s (280kB/s)(2764KiB/10125msec) 00:25:48.657 slat (nsec): min=6260, max=32126, avg=9147.81, stdev=3370.47 00:25:48.657 clat (msec): min=10, max=483, avg=233.51, stdev=77.16 00:25:48.657 lat (msec): min=10, max=483, avg=233.52, stdev=77.16 00:25:48.657 clat percentiles (msec): 00:25:48.657 | 1.00th=[ 11], 5.00th=[ 37], 10.00th=[ 132], 20.00th=[ 205], 00:25:48.657 | 30.00th=[ 218], 40.00th=[ 241], 50.00th=[ 257], 60.00th=[ 262], 00:25:48.657 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 279], 95.00th=[ 338], 00:25:48.657 | 99.00th=[ 472], 99.50th=[ 472], 99.90th=[ 485], 99.95th=[ 485], 00:25:48.657 | 99.99th=[ 485] 00:25:48.657 bw ( KiB/s): min= 127, max= 641, per=4.74%, avg=269.55, stdev=103.72, samples=20 00:25:48.657 iops : min= 31, max= 160, avg=67.00, stdev=26.01, samples=20 00:25:48.657 lat (msec) : 20=4.63%, 50=0.87%, 100=1.45%, 250=36.90%, 500=56.15% 00:25:48.657 cpu : usr=99.00%, sys=0.64%, ctx=10, majf=0, minf=39 00:25:48.657 IO depths : 1=2.2%, 2=7.2%, 4=21.3%, 8=59.3%, 16=10.0%, 32=0.0%, >=64=0.0% 00:25:48.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 issued rwts: total=691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.657 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.657 filename1: (groupid=0, jobs=1): err= 0: pid=1836261: Sat Apr 27 00:59:39 2024 00:25:48.657 read: IOPS=62, BW=251KiB/s (257kB/s)(2520KiB/10054msec) 00:25:48.657 slat (nsec): min=6367, max=36612, avg=11363.06, stdev=5057.64 00:25:48.657 clat (msec): min=161, 
max=438, avg=255.24, stdev=41.22 00:25:48.657 lat (msec): min=161, max=438, avg=255.25, stdev=41.22 00:25:48.657 clat percentiles (msec): 00:25:48.657 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 201], 20.00th=[ 224], 00:25:48.657 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 259], 60.00th=[ 268], 00:25:48.657 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 284], 95.00th=[ 305], 00:25:48.657 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 439], 99.95th=[ 439], 00:25:48.657 | 99.99th=[ 439] 00:25:48.657 bw ( KiB/s): min= 127, max= 256, per=4.32%, avg=245.15, stdev=31.33, samples=20 00:25:48.657 iops : min= 31, max= 64, avg=60.95, stdev= 7.89, samples=20 00:25:48.657 lat (msec) : 250=37.14%, 500=62.86% 00:25:48.657 cpu : usr=99.06%, sys=0.60%, ctx=11, majf=0, minf=41 00:25:48.657 IO depths : 1=5.2%, 2=11.1%, 4=23.7%, 8=52.7%, 16=7.3%, 32=0.0%, >=64=0.0% 00:25:48.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 issued rwts: total=630,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.657 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.657 filename1: (groupid=0, jobs=1): err= 0: pid=1836262: Sat Apr 27 00:59:39 2024 00:25:48.657 read: IOPS=65, BW=260KiB/s (266kB/s)(2616KiB/10054msec) 00:25:48.657 slat (nsec): min=6236, max=30657, avg=9439.84, stdev=3700.14 00:25:48.657 clat (msec): min=161, max=390, avg=245.74, stdev=33.00 00:25:48.657 lat (msec): min=161, max=390, avg=245.75, stdev=33.00 00:25:48.657 clat percentiles (msec): 00:25:48.657 | 1.00th=[ 163], 5.00th=[ 178], 10.00th=[ 201], 20.00th=[ 218], 00:25:48.657 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 259], 00:25:48.657 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 284], 00:25:48.657 | 99.00th=[ 284], 99.50th=[ 292], 99.90th=[ 393], 99.95th=[ 393], 00:25:48.657 | 99.99th=[ 393] 00:25:48.657 bw ( KiB/s): min= 127, max= 384, per=4.49%, avg=255.55, stdev=41.69, samples=20 00:25:48.657 iops : min= 31, max= 96, avg=63.55, stdev=10.56, samples=20 00:25:48.657 lat (msec) : 250=43.73%, 500=56.27% 00:25:48.657 cpu : usr=99.01%, sys=0.63%, ctx=16, majf=0, minf=39 00:25:48.657 IO depths : 1=6.0%, 2=12.2%, 4=25.1%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:25:48.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.657 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.657 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.657 filename1: (groupid=0, jobs=1): err= 0: pid=1836263: Sat Apr 27 00:59:39 2024 00:25:48.657 read: IOPS=68, BW=273KiB/s (279kB/s)(2748KiB/10080msec) 00:25:48.657 slat (nsec): min=6321, max=43186, avg=11423.15, stdev=4930.07 00:25:48.657 clat (msec): min=3, max=427, avg=234.66, stdev=84.35 00:25:48.657 lat (msec): min=3, max=427, avg=234.67, stdev=84.35 00:25:48.657 clat percentiles (msec): 00:25:48.657 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 57], 20.00th=[ 211], 00:25:48.657 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 264], 00:25:48.657 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 288], 95.00th=[ 313], 00:25:48.657 | 99.00th=[ 418], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:25:48.657 | 99.99th=[ 426] 00:25:48.657 bw ( KiB/s): min= 127, max= 729, per=4.72%, avg=268.00, stdev=124.15, samples=20 00:25:48.657 iops : min= 31, max= 182, avg=66.65, stdev=31.08, samples=20 00:25:48.657 lat (msec) : 4=0.15%, 
10=7.57%, 20=0.87%, 100=1.75%, 250=33.48% 00:25:48.657 lat (msec) : 500=56.19% 00:25:48.657 cpu : usr=99.00%, sys=0.63%, ctx=13, majf=0, minf=42 00:25:48.657 IO depths : 1=2.5%, 2=7.6%, 4=23.1%, 8=56.9%, 16=9.9%, 32=0.0%, >=64=0.0% 00:25:48.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 issued rwts: total=687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.658 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.658 filename1: (groupid=0, jobs=1): err= 0: pid=1836264: Sat Apr 27 00:59:39 2024 00:25:48.658 read: IOPS=47, BW=191KiB/s (195kB/s)(1912KiB/10031msec) 00:25:48.658 slat (nsec): min=6249, max=48538, avg=10787.39, stdev=5317.51 00:25:48.658 clat (msec): min=180, max=537, avg=335.29, stdev=84.33 00:25:48.658 lat (msec): min=180, max=537, avg=335.30, stdev=84.33 00:25:48.658 clat percentiles (msec): 00:25:48.658 | 1.00th=[ 182], 5.00th=[ 207], 10.00th=[ 239], 20.00th=[ 268], 00:25:48.658 | 30.00th=[ 275], 40.00th=[ 288], 50.00th=[ 330], 60.00th=[ 338], 00:25:48.658 | 70.00th=[ 397], 80.00th=[ 435], 90.00th=[ 447], 95.00th=[ 451], 00:25:48.658 | 99.00th=[ 523], 99.50th=[ 523], 99.90th=[ 542], 99.95th=[ 542], 00:25:48.658 | 99.99th=[ 542] 00:25:48.658 bw ( KiB/s): min= 111, max= 256, per=3.28%, avg=186.65, stdev=61.87, samples=20 00:25:48.658 iops : min= 27, max= 64, avg=46.25, stdev=15.46, samples=20 00:25:48.658 lat (msec) : 250=11.72%, 500=85.36%, 750=2.93% 00:25:48.658 cpu : usr=98.39%, sys=0.89%, ctx=16, majf=0, minf=36 00:25:48.658 IO depths : 1=2.7%, 2=8.4%, 4=23.0%, 8=56.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:25:48.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 issued rwts: total=478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.658 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.658 filename2: (groupid=0, jobs=1): err= 0: pid=1836265: Sat Apr 27 00:59:39 2024 00:25:48.658 read: IOPS=62, BW=249KiB/s (255kB/s)(2504KiB/10054msec) 00:25:48.658 slat (nsec): min=6342, max=35302, avg=11058.71, stdev=4555.90 00:25:48.658 clat (msec): min=166, max=426, avg=256.89, stdev=49.32 00:25:48.658 lat (msec): min=166, max=426, avg=256.90, stdev=49.32 00:25:48.658 clat percentiles (msec): 00:25:48.658 | 1.00th=[ 167], 5.00th=[ 186], 10.00th=[ 199], 20.00th=[ 213], 00:25:48.658 | 30.00th=[ 243], 40.00th=[ 253], 50.00th=[ 257], 60.00th=[ 264], 00:25:48.658 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 380], 00:25:48.658 | 99.00th=[ 422], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:25:48.658 | 99.99th=[ 426] 00:25:48.658 bw ( KiB/s): min= 128, max= 368, per=4.28%, avg=243.55, stdev=46.82, samples=20 00:25:48.658 iops : min= 32, max= 92, avg=60.55, stdev=11.81, samples=20 00:25:48.658 lat (msec) : 250=37.06%, 500=62.94% 00:25:48.658 cpu : usr=99.02%, sys=0.62%, ctx=9, majf=0, minf=44 00:25:48.658 IO depths : 1=2.6%, 2=7.7%, 4=21.6%, 8=58.5%, 16=9.7%, 32=0.0%, >=64=0.0% 00:25:48.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 issued rwts: total=626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.658 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.658 filename2: (groupid=0, jobs=1): err= 0: pid=1836266: Sat Apr 27 00:59:39 2024 00:25:48.658 read: IOPS=72, 
BW=289KiB/s (296kB/s)(2920KiB/10087msec) 00:25:48.658 slat (nsec): min=6112, max=41841, avg=9744.93, stdev=3813.10 00:25:48.658 clat (msec): min=5, max=477, avg=220.99, stdev=86.87 00:25:48.658 lat (msec): min=5, max=477, avg=221.00, stdev=86.87 00:25:48.658 clat percentiles (msec): 00:25:48.658 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 74], 20.00th=[ 171], 00:25:48.658 | 30.00th=[ 209], 40.00th=[ 234], 50.00th=[ 249], 60.00th=[ 253], 00:25:48.658 | 70.00th=[ 262], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 296], 00:25:48.658 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 477], 99.95th=[ 477], 00:25:48.658 | 99.99th=[ 477] 00:25:48.658 bw ( KiB/s): min= 144, max= 896, per=5.02%, avg=285.15, stdev=152.29, samples=20 00:25:48.658 iops : min= 36, max= 224, avg=70.95, stdev=38.17, samples=20 00:25:48.658 lat (msec) : 10=5.34%, 20=2.19%, 50=2.19%, 100=1.23%, 250=40.27% 00:25:48.658 lat (msec) : 500=48.77% 00:25:48.658 cpu : usr=98.98%, sys=0.64%, ctx=13, majf=0, minf=47 00:25:48.658 IO depths : 1=1.1%, 2=3.3%, 4=12.2%, 8=71.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:25:48.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 complete : 0=0.0%, 4=90.6%, 8=4.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 issued rwts: total=730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.658 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.658 filename2: (groupid=0, jobs=1): err= 0: pid=1836267: Sat Apr 27 00:59:39 2024 00:25:48.658 read: IOPS=44, BW=178KiB/s (182kB/s)(1784KiB/10030msec) 00:25:48.658 slat (nsec): min=6174, max=39854, avg=9648.64, stdev=5203.92 00:25:48.658 clat (msec): min=196, max=487, avg=359.67, stdev=72.94 00:25:48.658 lat (msec): min=197, max=487, avg=359.68, stdev=72.94 00:25:48.658 clat percentiles (msec): 00:25:48.658 | 1.00th=[ 211], 5.00th=[ 247], 10.00th=[ 262], 20.00th=[ 279], 00:25:48.658 | 30.00th=[ 321], 40.00th=[ 338], 50.00th=[ 368], 60.00th=[ 393], 00:25:48.658 | 70.00th=[ 430], 80.00th=[ 439], 90.00th=[ 447], 95.00th=[ 451], 00:25:48.658 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 489], 99.95th=[ 489], 00:25:48.658 | 99.99th=[ 489] 00:25:48.658 bw ( KiB/s): min= 127, max= 272, per=3.01%, avg=171.45, stdev=57.06, samples=20 00:25:48.658 iops : min= 31, max= 68, avg=42.45, stdev=14.27, samples=20 00:25:48.658 lat (msec) : 250=7.62%, 500=92.38% 00:25:48.658 cpu : usr=99.15%, sys=0.48%, ctx=5, majf=0, minf=37 00:25:48.658 IO depths : 1=2.0%, 2=8.3%, 4=25.1%, 8=54.3%, 16=10.3%, 32=0.0%, >=64=0.0% 00:25:48.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 issued rwts: total=446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.658 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.658 filename2: (groupid=0, jobs=1): err= 0: pid=1836268: Sat Apr 27 00:59:39 2024 00:25:48.658 read: IOPS=59, BW=240KiB/s (246kB/s)(2408KiB/10036msec) 00:25:48.658 slat (nsec): min=6057, max=50993, avg=9727.97, stdev=4867.01 00:25:48.658 clat (msec): min=178, max=473, avg=266.56, stdev=44.30 00:25:48.658 lat (msec): min=178, max=473, avg=266.57, stdev=44.30 00:25:48.658 clat percentiles (msec): 00:25:48.658 | 1.00th=[ 180], 5.00th=[ 215], 10.00th=[ 224], 20.00th=[ 243], 00:25:48.658 | 30.00th=[ 251], 40.00th=[ 257], 50.00th=[ 266], 60.00th=[ 271], 00:25:48.658 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 300], 95.00th=[ 355], 00:25:48.658 | 99.00th=[ 439], 99.50th=[ 443], 99.90th=[ 472], 99.95th=[ 472], 00:25:48.658 | 99.99th=[ 
472] 00:25:48.658 bw ( KiB/s): min= 127, max= 256, per=4.12%, avg=234.00, stdev=44.12, samples=20 00:25:48.658 iops : min= 31, max= 64, avg=58.20, stdev=11.03, samples=20 00:25:48.658 lat (msec) : 250=29.90%, 500=70.10% 00:25:48.658 cpu : usr=99.23%, sys=0.40%, ctx=10, majf=0, minf=37 00:25:48.658 IO depths : 1=2.7%, 2=9.0%, 4=25.2%, 8=53.7%, 16=9.5%, 32=0.0%, >=64=0.0% 00:25:48.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 issued rwts: total=602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.658 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.658 filename2: (groupid=0, jobs=1): err= 0: pid=1836269: Sat Apr 27 00:59:39 2024 00:25:48.658 read: IOPS=43, BW=172KiB/s (177kB/s)(1728KiB/10025msec) 00:25:48.658 slat (nsec): min=8627, max=55957, avg=20154.38, stdev=8799.80 00:25:48.658 clat (msec): min=224, max=537, avg=371.11, stdev=64.54 00:25:48.658 lat (msec): min=224, max=537, avg=371.13, stdev=64.54 00:25:48.658 clat percentiles (msec): 00:25:48.658 | 1.00th=[ 247], 5.00th=[ 268], 10.00th=[ 271], 20.00th=[ 313], 00:25:48.658 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 380], 60.00th=[ 401], 00:25:48.658 | 70.00th=[ 426], 80.00th=[ 439], 90.00th=[ 439], 95.00th=[ 447], 00:25:48.658 | 99.00th=[ 485], 99.50th=[ 502], 99.90th=[ 542], 99.95th=[ 542], 00:25:48.658 | 99.99th=[ 542] 00:25:48.658 bw ( KiB/s): min= 127, max= 256, per=2.91%, avg=165.90, stdev=58.70, samples=20 00:25:48.658 iops : min= 31, max= 64, avg=41.10, stdev=14.76, samples=20 00:25:48.658 lat (msec) : 250=4.17%, 500=94.91%, 750=0.93% 00:25:48.658 cpu : usr=99.17%, sys=0.45%, ctx=15, majf=0, minf=34 00:25:48.658 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:25:48.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.658 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.658 filename2: (groupid=0, jobs=1): err= 0: pid=1836270: Sat Apr 27 00:59:39 2024 00:25:48.658 read: IOPS=61, BW=247KiB/s (253kB/s)(2480KiB/10048msec) 00:25:48.658 slat (nsec): min=6392, max=54436, avg=11300.17, stdev=5793.33 00:25:48.658 clat (msec): min=160, max=509, avg=259.21, stdev=50.82 00:25:48.658 lat (msec): min=160, max=509, avg=259.22, stdev=50.82 00:25:48.658 clat percentiles (msec): 00:25:48.658 | 1.00th=[ 161], 5.00th=[ 180], 10.00th=[ 201], 20.00th=[ 224], 00:25:48.658 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 257], 60.00th=[ 266], 00:25:48.658 | 70.00th=[ 271], 80.00th=[ 275], 90.00th=[ 296], 95.00th=[ 372], 00:25:48.658 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 510], 99.95th=[ 510], 00:25:48.658 | 99.99th=[ 510] 00:25:48.658 bw ( KiB/s): min= 128, max= 384, per=4.25%, avg=241.10, stdev=53.67, samples=20 00:25:48.658 iops : min= 32, max= 96, avg=59.90, stdev=13.43, samples=20 00:25:48.658 lat (msec) : 250=40.32%, 500=59.35%, 750=0.32% 00:25:48.658 cpu : usr=99.11%, sys=0.54%, ctx=12, majf=0, minf=34 00:25:48.658 IO depths : 1=3.2%, 2=9.2%, 4=24.2%, 8=54.4%, 16=9.0%, 32=0.0%, >=64=0.0% 00:25:48.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.658 issued rwts: total=620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.658 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:25:48.658 filename2: (groupid=0, jobs=1): err= 0: pid=1836271: Sat Apr 27 00:59:39 2024 00:25:48.658 read: IOPS=73, BW=293KiB/s (300kB/s)(2952KiB/10091msec) 00:25:48.658 slat (nsec): min=4202, max=45774, avg=9957.80, stdev=4364.19 00:25:48.658 clat (msec): min=4, max=445, avg=218.48, stdev=79.29 00:25:48.658 lat (msec): min=4, max=445, avg=218.49, stdev=79.29 00:25:48.658 clat percentiles (msec): 00:25:48.658 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 38], 20.00th=[ 197], 00:25:48.658 | 30.00th=[ 209], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 253], 00:25:48.658 | 70.00th=[ 264], 80.00th=[ 271], 90.00th=[ 279], 95.00th=[ 284], 00:25:48.659 | 99.00th=[ 359], 99.50th=[ 426], 99.90th=[ 447], 99.95th=[ 447], 00:25:48.659 | 99.99th=[ 447] 00:25:48.659 bw ( KiB/s): min= 127, max= 848, per=5.08%, avg=288.25, stdev=140.69, samples=20 00:25:48.659 iops : min= 31, max= 212, avg=71.65, stdev=35.32, samples=20 00:25:48.659 lat (msec) : 10=5.69%, 20=2.17%, 50=2.17%, 250=44.17%, 500=45.80% 00:25:48.659 cpu : usr=99.02%, sys=0.59%, ctx=13, majf=0, minf=43 00:25:48.659 IO depths : 1=3.4%, 2=9.2%, 4=23.7%, 8=54.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:25:48.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.659 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.659 issued rwts: total=738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.659 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.659 filename2: (groupid=0, jobs=1): err= 0: pid=1836272: Sat Apr 27 00:59:39 2024 00:25:48.659 read: IOPS=43, BW=172KiB/s (177kB/s)(1728KiB/10023msec) 00:25:48.659 slat (nsec): min=6649, max=92972, avg=29237.29, stdev=22625.79 00:25:48.659 clat (msec): min=220, max=527, avg=370.96, stdev=64.03 00:25:48.659 lat (msec): min=221, max=527, avg=370.99, stdev=64.02 00:25:48.659 clat percentiles (msec): 00:25:48.659 | 1.00th=[ 247], 5.00th=[ 266], 10.00th=[ 271], 20.00th=[ 313], 00:25:48.659 | 30.00th=[ 338], 40.00th=[ 351], 50.00th=[ 380], 60.00th=[ 401], 00:25:48.659 | 70.00th=[ 426], 80.00th=[ 439], 90.00th=[ 439], 95.00th=[ 447], 00:25:48.659 | 99.00th=[ 447], 99.50th=[ 523], 99.90th=[ 527], 99.95th=[ 527], 00:25:48.659 | 99.99th=[ 527] 00:25:48.659 bw ( KiB/s): min= 127, max= 256, per=2.91%, avg=165.90, stdev=58.81, samples=20 00:25:48.659 iops : min= 31, max= 64, avg=41.10, stdev=14.87, samples=20 00:25:48.659 lat (msec) : 250=3.70%, 500=95.37%, 750=0.93% 00:25:48.659 cpu : usr=99.06%, sys=0.60%, ctx=14, majf=0, minf=45 00:25:48.659 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:25:48.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.659 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.659 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.659 latency : target=0, window=0, percentile=100.00%, depth=16 00:25:48.659 00:25:48.659 Run status group 0 (all jobs): 00:25:48.659 READ: bw=5673KiB/s (5809kB/s), 172KiB/s-293KiB/s (177kB/s-300kB/s), io=56.1MiB (58.8MB), run=10023-10125msec 00:25:48.659 00:59:39 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:25:48.659 00:59:39 -- target/dif.sh@43 -- # local sub 00:25:48.659 00:59:39 -- target/dif.sh@45 -- # for sub in "$@" 00:25:48.659 00:59:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:48.659 00:59:39 -- target/dif.sh@36 -- # local sub_id=0 00:25:48.659 00:59:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:48.659 
00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:48.659 00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@45 -- # for sub in "$@" 00:25:48.659 00:59:39 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:48.659 00:59:39 -- target/dif.sh@36 -- # local sub_id=1 00:25:48.659 00:59:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:48.659 00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:48.659 00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@45 -- # for sub in "$@" 00:25:48.659 00:59:39 -- target/dif.sh@46 -- # destroy_subsystem 2 00:25:48.659 00:59:39 -- target/dif.sh@36 -- # local sub_id=2 00:25:48.659 00:59:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:48.659 00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:25:48.659 00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@115 -- # NULL_DIF=1 00:25:48.659 00:59:39 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:25:48.659 00:59:39 -- target/dif.sh@115 -- # numjobs=2 00:25:48.659 00:59:39 -- target/dif.sh@115 -- # iodepth=8 00:25:48.659 00:59:39 -- target/dif.sh@115 -- # runtime=5 00:25:48.659 00:59:39 -- target/dif.sh@115 -- # files=1 00:25:48.659 00:59:39 -- target/dif.sh@117 -- # create_subsystems 0 1 00:25:48.659 00:59:39 -- target/dif.sh@28 -- # local sub 00:25:48.659 00:59:39 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.659 00:59:39 -- target/dif.sh@31 -- # create_subsystem 0 00:25:48.659 00:59:39 -- target/dif.sh@18 -- # local sub_id=0 00:25:48.659 00:59:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:48.659 00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 bdev_null0 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:48.659 00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:48.659 00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:48.659 00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 [2024-04-27 00:59:39.785393] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.659 00:59:39 -- target/dif.sh@31 -- # create_subsystem 1 00:25:48.659 00:59:39 -- target/dif.sh@18 -- # local sub_id=1 00:25:48.659 00:59:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:48.659 00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 bdev_null1 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:48.659 00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:48.659 00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.659 00:59:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.659 00:59:39 -- common/autotest_common.sh@10 -- # set +x 00:25:48.659 00:59:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.659 00:59:39 -- target/dif.sh@118 -- # fio /dev/fd/62 00:25:48.659 00:59:39 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:25:48.659 00:59:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:48.659 00:59:39 -- nvmf/common.sh@521 -- # config=() 00:25:48.659 00:59:39 -- nvmf/common.sh@521 -- # local subsystem config 00:25:48.659 00:59:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:48.659 00:59:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:48.659 { 00:25:48.659 "params": { 00:25:48.659 "name": "Nvme$subsystem", 00:25:48.659 "trtype": "$TEST_TRANSPORT", 00:25:48.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.659 "adrfam": "ipv4", 00:25:48.659 "trsvcid": "$NVMF_PORT", 00:25:48.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.659 "hdgst": ${hdgst:-false}, 00:25:48.659 "ddgst": ${ddgst:-false} 00:25:48.659 }, 00:25:48.659 "method": "bdev_nvme_attach_controller" 00:25:48.659 } 00:25:48.659 EOF 00:25:48.659 )") 00:25:48.659 00:59:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.659 00:59:39 -- 
common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.659 00:59:39 -- target/dif.sh@82 -- # gen_fio_conf 00:25:48.659 00:59:39 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:48.659 00:59:39 -- target/dif.sh@54 -- # local file 00:25:48.659 00:59:39 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:48.659 00:59:39 -- nvmf/common.sh@543 -- # cat 00:25:48.659 00:59:39 -- target/dif.sh@56 -- # cat 00:25:48.659 00:59:39 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:48.659 00:59:39 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:48.659 00:59:39 -- common/autotest_common.sh@1327 -- # shift 00:25:48.659 00:59:39 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:48.659 00:59:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.659 00:59:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:48.659 00:59:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:48.659 { 00:25:48.659 "params": { 00:25:48.659 "name": "Nvme$subsystem", 00:25:48.659 "trtype": "$TEST_TRANSPORT", 00:25:48.659 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.659 "adrfam": "ipv4", 00:25:48.659 "trsvcid": "$NVMF_PORT", 00:25:48.659 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.659 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.660 "hdgst": ${hdgst:-false}, 00:25:48.660 "ddgst": ${ddgst:-false} 00:25:48.660 }, 00:25:48.660 "method": "bdev_nvme_attach_controller" 00:25:48.660 } 00:25:48.660 EOF 00:25:48.660 )") 00:25:48.660 00:59:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:48.660 00:59:39 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:48.660 00:59:39 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:48.660 00:59:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:48.660 00:59:39 -- target/dif.sh@72 -- # (( file <= files )) 00:25:48.660 00:59:39 -- nvmf/common.sh@543 -- # cat 00:25:48.660 00:59:39 -- target/dif.sh@73 -- # cat 00:25:48.660 00:59:39 -- nvmf/common.sh@545 -- # jq . 
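The fio_plugin helper traced above decides what to LD_PRELOAD before launching fio: it runs ldd against the SPDK fio bdev plugin, greps for a sanitizer runtime (libasan or libclang_rt.asan), and preloads that library ahead of the plugin when one is linked in. A condensed sketch of that probe, with the plugin path and fio invocation copied from the trace (the loop body is a simplification of common/autotest_common.sh, not the verbatim source):

# Simplified from the fio_plugin xtrace above; illustrative, not verbatim source.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
sanitizers=('libasan' 'libclang_rt.asan')
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # Column 3 of the matching ldd line is the resolved library path (empty when not linked).
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
# Preload the sanitizer runtime (if any) ahead of the SPDK bdev ioengine plugin, then run fio.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
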
00:25:48.660 00:59:39 -- target/dif.sh@72 -- # (( file++ )) 00:25:48.660 00:59:39 -- target/dif.sh@72 -- # (( file <= files )) 00:25:48.660 00:59:39 -- nvmf/common.sh@546 -- # IFS=, 00:25:48.660 00:59:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:48.660 "params": { 00:25:48.660 "name": "Nvme0", 00:25:48.660 "trtype": "tcp", 00:25:48.660 "traddr": "10.0.0.2", 00:25:48.660 "adrfam": "ipv4", 00:25:48.660 "trsvcid": "4420", 00:25:48.660 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:48.660 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:48.660 "hdgst": false, 00:25:48.660 "ddgst": false 00:25:48.660 }, 00:25:48.660 "method": "bdev_nvme_attach_controller" 00:25:48.660 },{ 00:25:48.660 "params": { 00:25:48.660 "name": "Nvme1", 00:25:48.660 "trtype": "tcp", 00:25:48.660 "traddr": "10.0.0.2", 00:25:48.660 "adrfam": "ipv4", 00:25:48.660 "trsvcid": "4420", 00:25:48.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:48.660 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:48.660 "hdgst": false, 00:25:48.660 "ddgst": false 00:25:48.660 }, 00:25:48.660 "method": "bdev_nvme_attach_controller" 00:25:48.660 }' 00:25:48.660 00:59:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:48.660 00:59:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:48.660 00:59:39 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.660 00:59:39 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:48.660 00:59:39 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:48.660 00:59:39 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:48.660 00:59:39 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:48.660 00:59:39 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:48.660 00:59:39 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:48.660 00:59:39 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.660 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:48.660 ... 00:25:48.660 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:25:48.660 ... 
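The JSON printed by gen_nvmf_target_json gives fio one bdev_nvme_attach_controller entry per subsystem (Nvme0 and Nvme1, both over TCP to 10.0.0.2:4420, digests disabled), while gen_fio_conf supplies the job file on /dev/fd/61. A hedged reconstruction of that job file, based on the NULL_DIF/bs/numjobs/iodepth/runtime settings above and the filename0/filename1 lines that follow; the Nvme0n1/Nvme1n1 filenames are assumed from SPDK's usual controller-name plus namespace convention, and thread/time_based are assumed defaults of the harness:

# Approximation of the job file gen_fio_conf feeds to fio on /dev/fd/61 for this run.
cat > dif.fio <<'FIO'
[global]
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
FIO
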
00:25:48.660 fio-3.35 00:25:48.660 Starting 4 threads 00:25:48.660 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.934 00:25:53.934 filename0: (groupid=0, jobs=1): err= 0: pid=1838268: Sat Apr 27 00:59:45 2024 00:25:53.934 read: IOPS=2559, BW=20.0MiB/s (21.0MB/s)(100MiB/5003msec) 00:25:53.934 slat (nsec): min=6013, max=25342, avg=8400.66, stdev=2464.68 00:25:53.934 clat (usec): min=1841, max=49605, avg=3104.82, stdev=1229.17 00:25:53.934 lat (usec): min=1848, max=49629, avg=3113.22, stdev=1229.27 00:25:53.934 clat percentiles (usec): 00:25:53.934 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2769], 00:25:53.934 | 30.00th=[ 2933], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130], 00:25:53.934 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3556], 95.00th=[ 3752], 00:25:53.934 | 99.00th=[ 4146], 99.50th=[ 4228], 99.90th=[ 4817], 99.95th=[49546], 00:25:53.934 | 99.99th=[49546] 00:25:53.934 bw ( KiB/s): min=18512, max=20864, per=25.11%, avg=20478.40, stdev=698.79, samples=10 00:25:53.934 iops : min= 2314, max= 2608, avg=2559.80, stdev=87.35, samples=10 00:25:53.934 lat (msec) : 2=0.22%, 4=97.95%, 10=1.77%, 50=0.06% 00:25:53.934 cpu : usr=96.24%, sys=3.42%, ctx=6, majf=0, minf=9 00:25:53.934 IO depths : 1=0.1%, 2=0.9%, 4=65.6%, 8=33.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:53.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.934 complete : 0=0.0%, 4=96.7%, 8=3.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.934 issued rwts: total=12804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.934 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:53.934 filename0: (groupid=0, jobs=1): err= 0: pid=1838269: Sat Apr 27 00:59:45 2024 00:25:53.934 read: IOPS=2560, BW=20.0MiB/s (21.0MB/s)(100MiB/5001msec) 00:25:53.934 slat (nsec): min=6029, max=26427, avg=8535.67, stdev=2560.56 00:25:53.934 clat (usec): min=1367, max=5309, avg=3103.48, stdev=435.75 00:25:53.934 lat (usec): min=1373, max=5335, avg=3112.02, stdev=435.69 00:25:53.934 clat percentiles (usec): 00:25:53.934 | 1.00th=[ 2073], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2769], 00:25:53.934 | 30.00th=[ 2966], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3163], 00:25:53.934 | 70.00th=[ 3294], 80.00th=[ 3425], 90.00th=[ 3621], 95.00th=[ 3818], 00:25:53.934 | 99.00th=[ 4228], 99.50th=[ 4359], 99.90th=[ 4686], 99.95th=[ 5014], 00:25:53.934 | 99.99th=[ 5080] 00:25:53.934 bw ( KiB/s): min=20160, max=21184, per=25.12%, avg=20488.89, stdev=324.48, samples=9 00:25:53.934 iops : min= 2520, max= 2648, avg=2561.11, stdev=40.56, samples=9 00:25:53.934 lat (msec) : 2=0.87%, 4=96.55%, 10=2.59% 00:25:53.934 cpu : usr=96.60%, sys=3.10%, ctx=7, majf=0, minf=9 00:25:53.934 IO depths : 1=0.1%, 2=1.0%, 4=65.7%, 8=33.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:53.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.934 complete : 0=0.0%, 4=96.6%, 8=3.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.934 issued rwts: total=12804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.934 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:53.934 filename1: (groupid=0, jobs=1): err= 0: pid=1838270: Sat Apr 27 00:59:45 2024 00:25:53.934 read: IOPS=2565, BW=20.0MiB/s (21.0MB/s)(100MiB/5002msec) 00:25:53.934 slat (nsec): min=6031, max=25707, avg=8511.15, stdev=2520.64 00:25:53.934 clat (usec): min=1121, max=6181, avg=3097.33, stdev=427.73 00:25:53.934 lat (usec): min=1127, max=6206, avg=3105.84, stdev=427.76 00:25:53.934 clat percentiles (usec): 00:25:53.934 | 1.00th=[ 2073], 5.00th=[ 
2376], 10.00th=[ 2540], 20.00th=[ 2769], 00:25:53.934 | 30.00th=[ 2966], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130], 00:25:53.934 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3589], 95.00th=[ 3818], 00:25:53.934 | 99.00th=[ 4228], 99.50th=[ 4424], 99.90th=[ 4686], 99.95th=[ 5866], 00:25:53.934 | 99.99th=[ 6128] 00:25:53.934 bw ( KiB/s): min=20192, max=21104, per=25.15%, avg=20516.80, stdev=264.10, samples=10 00:25:53.934 iops : min= 2524, max= 2638, avg=2564.60, stdev=33.01, samples=10 00:25:53.934 lat (msec) : 2=0.77%, 4=96.81%, 10=2.42% 00:25:53.934 cpu : usr=96.54%, sys=3.14%, ctx=6, majf=0, minf=10 00:25:53.934 IO depths : 1=0.1%, 2=0.9%, 4=66.0%, 8=33.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:53.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.934 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.934 issued rwts: total=12831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.934 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:53.934 filename1: (groupid=0, jobs=1): err= 0: pid=1838271: Sat Apr 27 00:59:45 2024 00:25:53.934 read: IOPS=2512, BW=19.6MiB/s (20.6MB/s)(98.2MiB/5002msec) 00:25:53.934 slat (nsec): min=6047, max=25783, avg=8411.64, stdev=2495.66 00:25:53.934 clat (usec): min=1682, max=47142, avg=3163.86, stdev=1178.84 00:25:53.934 lat (usec): min=1689, max=47167, avg=3172.27, stdev=1178.91 00:25:53.934 clat percentiles (usec): 00:25:53.934 | 1.00th=[ 2180], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2835], 00:25:53.934 | 30.00th=[ 3032], 40.00th=[ 3097], 50.00th=[ 3097], 60.00th=[ 3163], 00:25:53.934 | 70.00th=[ 3294], 80.00th=[ 3425], 90.00th=[ 3621], 95.00th=[ 3818], 00:25:53.934 | 99.00th=[ 4178], 99.50th=[ 4424], 99.90th=[ 4817], 99.95th=[46924], 00:25:53.934 | 99.99th=[46924] 00:25:53.934 bw ( KiB/s): min=18356, max=20400, per=24.64%, avg=20098.00, stdev=627.67, samples=10 00:25:53.934 iops : min= 2294, max= 2550, avg=2512.20, stdev=78.61, samples=10 00:25:53.934 lat (msec) : 2=0.18%, 4=97.32%, 10=2.44%, 50=0.06% 00:25:53.934 cpu : usr=96.22%, sys=3.40%, ctx=6, majf=0, minf=9 00:25:53.934 IO depths : 1=0.1%, 2=0.9%, 4=65.5%, 8=33.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:53.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.934 complete : 0=0.0%, 4=96.9%, 8=3.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.934 issued rwts: total=12566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.934 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:53.934 00:25:53.934 Run status group 0 (all jobs): 00:25:53.934 READ: bw=79.6MiB/s (83.5MB/s), 19.6MiB/s-20.0MiB/s (20.6MB/s-21.0MB/s), io=398MiB (418MB), run=5001-5003msec 00:25:53.934 00:59:46 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:25:53.934 00:59:46 -- target/dif.sh@43 -- # local sub 00:25:53.934 00:59:46 -- target/dif.sh@45 -- # for sub in "$@" 00:25:53.934 00:59:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:53.934 00:59:46 -- target/dif.sh@36 -- # local sub_id=0 00:25:53.934 00:59:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:53.934 00:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.934 00:59:46 -- common/autotest_common.sh@10 -- # set +x 00:25:53.934 00:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.934 00:59:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:53.935 00:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.935 00:59:46 -- common/autotest_common.sh@10 -- # set +x 
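The teardown traced around this point follows target/dif.sh's destroy_subsystem() for each subsystem in turn: the NVMe-oF subsystem is removed first, then its backing null bdev is deleted. Reconstructed from the @36-@46 line references in the trace (rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py; this is a sketch, not the verbatim dif.sh source):

# Sketch of the teardown helpers as they appear in the xtrace (target/dif.sh@36-46).
destroy_subsystem() {
    local sub_id=$1
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}"
    rpc_cmd bdev_null_delete "bdev_null${sub_id}"
}

destroy_subsystems() {
    local sub
    for sub in "$@"; do
        destroy_subsystem "$sub"
    done
}
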
00:25:53.935 00:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.935 00:59:46 -- target/dif.sh@45 -- # for sub in "$@" 00:25:53.935 00:59:46 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:53.935 00:59:46 -- target/dif.sh@36 -- # local sub_id=1 00:25:53.935 00:59:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.935 00:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.935 00:59:46 -- common/autotest_common.sh@10 -- # set +x 00:25:53.935 00:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.935 00:59:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:53.935 00:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.935 00:59:46 -- common/autotest_common.sh@10 -- # set +x 00:25:53.935 00:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.935 00:25:53.935 real 0m24.016s 00:25:53.935 user 4m53.766s 00:25:53.935 sys 0m3.672s 00:25:53.935 00:59:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:53.935 00:59:46 -- common/autotest_common.sh@10 -- # set +x 00:25:53.935 ************************************ 00:25:53.935 END TEST fio_dif_rand_params 00:25:53.935 ************************************ 00:25:53.935 00:59:46 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:25:53.935 00:59:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:53.935 00:59:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:53.935 00:59:46 -- common/autotest_common.sh@10 -- # set +x 00:25:53.935 ************************************ 00:25:53.935 START TEST fio_dif_digest 00:25:53.935 ************************************ 00:25:53.935 00:59:46 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:25:53.935 00:59:46 -- target/dif.sh@123 -- # local NULL_DIF 00:25:53.935 00:59:46 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:25:53.935 00:59:46 -- target/dif.sh@125 -- # local hdgst ddgst 00:25:53.935 00:59:46 -- target/dif.sh@127 -- # NULL_DIF=3 00:25:53.935 00:59:46 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:25:53.935 00:59:46 -- target/dif.sh@127 -- # numjobs=3 00:25:53.935 00:59:46 -- target/dif.sh@127 -- # iodepth=3 00:25:53.935 00:59:46 -- target/dif.sh@127 -- # runtime=10 00:25:53.935 00:59:46 -- target/dif.sh@128 -- # hdgst=true 00:25:53.935 00:59:46 -- target/dif.sh@128 -- # ddgst=true 00:25:53.935 00:59:46 -- target/dif.sh@130 -- # create_subsystems 0 00:25:53.935 00:59:46 -- target/dif.sh@28 -- # local sub 00:25:53.935 00:59:46 -- target/dif.sh@30 -- # for sub in "$@" 00:25:53.935 00:59:46 -- target/dif.sh@31 -- # create_subsystem 0 00:25:53.935 00:59:46 -- target/dif.sh@18 -- # local sub_id=0 00:25:53.935 00:59:46 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:53.935 00:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.935 00:59:46 -- common/autotest_common.sh@10 -- # set +x 00:25:53.935 bdev_null0 00:25:53.935 00:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.935 00:59:46 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:53.935 00:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.935 00:59:46 -- common/autotest_common.sh@10 -- # set +x 00:25:53.935 00:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.935 00:59:46 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:25:53.935 00:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.935 00:59:46 -- common/autotest_common.sh@10 -- # set +x 00:25:53.935 00:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.935 00:59:46 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:53.935 00:59:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.935 00:59:46 -- common/autotest_common.sh@10 -- # set +x 00:25:53.935 [2024-04-27 00:59:46.298877] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.935 00:59:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.935 00:59:46 -- target/dif.sh@131 -- # fio /dev/fd/62 00:25:53.935 00:59:46 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:25:53.935 00:59:46 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:53.935 00:59:46 -- nvmf/common.sh@521 -- # config=() 00:25:53.935 00:59:46 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.935 00:59:46 -- nvmf/common.sh@521 -- # local subsystem config 00:25:53.935 00:59:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:53.935 00:59:46 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.935 00:59:46 -- target/dif.sh@82 -- # gen_fio_conf 00:25:53.935 00:59:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:53.935 { 00:25:53.935 "params": { 00:25:53.935 "name": "Nvme$subsystem", 00:25:53.935 "trtype": "$TEST_TRANSPORT", 00:25:53.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:53.935 "adrfam": "ipv4", 00:25:53.935 "trsvcid": "$NVMF_PORT", 00:25:53.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:53.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:53.935 "hdgst": ${hdgst:-false}, 00:25:53.935 "ddgst": ${ddgst:-false} 00:25:53.935 }, 00:25:53.935 "method": "bdev_nvme_attach_controller" 00:25:53.935 } 00:25:53.935 EOF 00:25:53.935 )") 00:25:53.935 00:59:46 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:53.935 00:59:46 -- target/dif.sh@54 -- # local file 00:25:53.935 00:59:46 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:53.935 00:59:46 -- target/dif.sh@56 -- # cat 00:25:53.935 00:59:46 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:53.935 00:59:46 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:53.935 00:59:46 -- common/autotest_common.sh@1327 -- # shift 00:25:53.935 00:59:46 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:53.935 00:59:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:53.935 00:59:46 -- nvmf/common.sh@543 -- # cat 00:25:53.935 00:59:46 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:53.935 00:59:46 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:53.935 00:59:46 -- target/dif.sh@72 -- # (( file <= files )) 00:25:53.935 00:59:46 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:53.935 00:59:46 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:53.935 00:59:46 -- nvmf/common.sh@545 -- # jq . 
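For the digest test, create_subsystem 0 builds a DIF type 3 null bdev and exports it over TCP, and the generated attach config enables both header and data digest on the initiator side. The equivalent standalone RPC sequence, with arguments copied from the trace (rpc_cmd wraps SPDK's scripts/rpc.py; a sketch, not the verbatim dif.sh source):

# Reconstruction of create_subsystem 0 for fio_dif_digest (target/dif.sh@18-24 above); arguments copied from the trace.
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# The attach config printed above then sets "hdgst": true and "ddgst": true, so the TCP
# initiator negotiates header and data digests for this connection.
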
00:25:53.935 00:59:46 -- nvmf/common.sh@546 -- # IFS=, 00:25:53.935 00:59:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:53.935 "params": { 00:25:53.935 "name": "Nvme0", 00:25:53.935 "trtype": "tcp", 00:25:53.935 "traddr": "10.0.0.2", 00:25:53.935 "adrfam": "ipv4", 00:25:53.935 "trsvcid": "4420", 00:25:53.935 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:53.935 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:53.935 "hdgst": true, 00:25:53.935 "ddgst": true 00:25:53.935 }, 00:25:53.935 "method": "bdev_nvme_attach_controller" 00:25:53.935 }' 00:25:53.935 00:59:46 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:53.935 00:59:46 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:53.935 00:59:46 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:53.935 00:59:46 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:53.935 00:59:46 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:53.935 00:59:46 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:53.935 00:59:46 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:53.935 00:59:46 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:53.935 00:59:46 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:53.935 00:59:46 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:54.194 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:54.194 ... 00:25:54.194 fio-3.35 00:25:54.194 Starting 3 threads 00:25:54.194 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.398 00:26:06.398 filename0: (groupid=0, jobs=1): err= 0: pid=1839497: Sat Apr 27 00:59:57 2024 00:26:06.398 read: IOPS=253, BW=31.7MiB/s (33.2MB/s)(318MiB/10021msec) 00:26:06.398 slat (nsec): min=6393, max=41900, avg=11846.79, stdev=4921.89 00:26:06.398 clat (usec): min=5221, max=94917, avg=11815.33, stdev=9445.12 00:26:06.398 lat (usec): min=5228, max=94928, avg=11827.18, stdev=9445.62 00:26:06.398 clat percentiles (usec): 00:26:06.398 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6587], 20.00th=[ 7635], 00:26:06.398 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[10683], 00:26:06.398 | 70.00th=[11338], 80.00th=[11994], 90.00th=[13435], 95.00th=[16909], 00:26:06.398 | 99.00th=[54789], 99.50th=[55837], 99.90th=[56886], 99.95th=[57410], 00:26:06.398 | 99.99th=[94897] 00:26:06.398 bw ( KiB/s): min=25088, max=45312, per=37.38%, avg=32502.00, stdev=5378.22, samples=20 00:26:06.398 iops : min= 196, max= 354, avg=253.80, stdev=42.02, samples=20 00:26:06.398 lat (msec) : 10=47.07%, 20=48.37%, 50=0.28%, 100=4.29% 00:26:06.398 cpu : usr=95.49%, sys=4.10%, ctx=17, majf=0, minf=136 00:26:06.398 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.398 issued rwts: total=2541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:06.398 filename0: (groupid=0, jobs=1): err= 0: pid=1839498: Sat Apr 27 00:59:57 2024 00:26:06.398 read: IOPS=203, BW=25.5MiB/s (26.7MB/s)(256MiB/10033msec) 00:26:06.398 slat (nsec): min=6351, max=40609, avg=12624.91, stdev=5299.12 00:26:06.398 clat 
(usec): min=5115, max=58924, avg=14701.47, stdev=12345.55 00:26:06.398 lat (usec): min=5123, max=58937, avg=14714.10, stdev=12345.65 00:26:06.398 clat percentiles (usec): 00:26:06.398 | 1.00th=[ 5735], 5.00th=[ 6718], 10.00th=[ 7963], 20.00th=[ 9110], 00:26:06.398 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:26:06.398 | 70.00th=[12518], 80.00th=[13435], 90.00th=[16319], 95.00th=[53740], 00:26:06.398 | 99.00th=[56886], 99.50th=[57410], 99.90th=[58459], 99.95th=[58983], 00:26:06.398 | 99.99th=[58983] 00:26:06.398 bw ( KiB/s): min=18176, max=32833, per=30.07%, avg=26140.85, stdev=3371.26, samples=20 00:26:06.398 iops : min= 142, max= 256, avg=204.20, stdev=26.29, samples=20 00:26:06.398 lat (msec) : 10=28.51%, 20=62.84%, 50=0.49%, 100=8.17% 00:26:06.398 cpu : usr=95.84%, sys=3.82%, ctx=19, majf=0, minf=114 00:26:06.398 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.398 issued rwts: total=2045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:06.398 filename0: (groupid=0, jobs=1): err= 0: pid=1839499: Sat Apr 27 00:59:57 2024 00:26:06.398 read: IOPS=222, BW=27.8MiB/s (29.2MB/s)(280MiB/10045msec) 00:26:06.398 slat (nsec): min=6292, max=60355, avg=12592.06, stdev=4734.14 00:26:06.398 clat (usec): min=5320, max=95686, avg=13436.72, stdev=11818.07 00:26:06.398 lat (usec): min=5327, max=95699, avg=13449.31, stdev=11818.31 00:26:06.398 clat percentiles (usec): 00:26:06.398 | 1.00th=[ 5604], 5.00th=[ 6259], 10.00th=[ 7046], 20.00th=[ 8160], 00:26:06.398 | 30.00th=[ 9241], 40.00th=[10159], 50.00th=[10814], 60.00th=[11207], 00:26:06.398 | 70.00th=[11731], 80.00th=[12387], 90.00th=[14222], 95.00th=[51643], 00:26:06.398 | 99.00th=[55313], 99.50th=[56886], 99.90th=[92799], 99.95th=[95945], 00:26:06.398 | 99.99th=[95945] 00:26:06.398 bw ( KiB/s): min=19200, max=41472, per=32.90%, avg=28608.00, stdev=5305.20, samples=20 00:26:06.398 iops : min= 150, max= 324, avg=223.50, stdev=41.45, samples=20 00:26:06.398 lat (msec) : 10=37.24%, 20=55.39%, 50=0.72%, 100=6.66% 00:26:06.398 cpu : usr=95.60%, sys=3.99%, ctx=22, majf=0, minf=185 00:26:06.398 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.398 issued rwts: total=2237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:06.398 00:26:06.399 Run status group 0 (all jobs): 00:26:06.399 READ: bw=84.9MiB/s (89.0MB/s), 25.5MiB/s-31.7MiB/s (26.7MB/s-33.2MB/s), io=853MiB (894MB), run=10021-10045msec 00:26:06.399 00:59:57 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:06.399 00:59:57 -- target/dif.sh@43 -- # local sub 00:26:06.399 00:59:57 -- target/dif.sh@45 -- # for sub in "$@" 00:26:06.399 00:59:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:06.399 00:59:57 -- target/dif.sh@36 -- # local sub_id=0 00:26:06.399 00:59:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:06.399 00:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.399 00:59:57 -- common/autotest_common.sh@10 -- # set +x 00:26:06.399 00:59:57 -- common/autotest_common.sh@577 -- # [[ 0 
== 0 ]] 00:26:06.399 00:59:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:06.399 00:59:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:06.399 00:59:57 -- common/autotest_common.sh@10 -- # set +x 00:26:06.399 00:59:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:06.399 00:26:06.399 real 0m11.138s 00:26:06.399 user 0m35.272s 00:26:06.399 sys 0m1.530s 00:26:06.399 00:59:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:06.399 00:59:57 -- common/autotest_common.sh@10 -- # set +x 00:26:06.399 ************************************ 00:26:06.399 END TEST fio_dif_digest 00:26:06.399 ************************************ 00:26:06.399 00:59:57 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:06.399 00:59:57 -- target/dif.sh@147 -- # nvmftestfini 00:26:06.399 00:59:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:06.399 00:59:57 -- nvmf/common.sh@117 -- # sync 00:26:06.399 00:59:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:06.399 00:59:57 -- nvmf/common.sh@120 -- # set +e 00:26:06.399 00:59:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:06.399 00:59:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:06.399 rmmod nvme_tcp 00:26:06.399 rmmod nvme_fabrics 00:26:06.399 rmmod nvme_keyring 00:26:06.399 00:59:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:06.399 00:59:57 -- nvmf/common.sh@124 -- # set -e 00:26:06.399 00:59:57 -- nvmf/common.sh@125 -- # return 0 00:26:06.399 00:59:57 -- nvmf/common.sh@478 -- # '[' -n 1830856 ']' 00:26:06.399 00:59:57 -- nvmf/common.sh@479 -- # killprocess 1830856 00:26:06.399 00:59:57 -- common/autotest_common.sh@936 -- # '[' -z 1830856 ']' 00:26:06.399 00:59:57 -- common/autotest_common.sh@940 -- # kill -0 1830856 00:26:06.399 00:59:57 -- common/autotest_common.sh@941 -- # uname 00:26:06.399 00:59:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:06.399 00:59:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1830856 00:26:06.399 00:59:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:06.399 00:59:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:06.399 00:59:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1830856' 00:26:06.399 killing process with pid 1830856 00:26:06.399 00:59:57 -- common/autotest_common.sh@955 -- # kill 1830856 00:26:06.399 00:59:57 -- common/autotest_common.sh@960 -- # wait 1830856 00:26:06.399 00:59:57 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:26:06.399 00:59:57 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:07.810 Waiting for block devices as requested 00:26:07.810 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:07.810 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:07.810 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:08.067 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:08.068 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:08.068 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:08.068 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:08.325 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:08.325 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:08.325 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:08.325 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:08.583 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:08.583 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:08.583 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:08.842 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:08.842 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:08.842 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:09.101 01:00:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:09.101 01:00:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:09.101 01:00:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:09.101 01:00:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:09.101 01:00:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.101 01:00:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:09.101 01:00:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.011 01:00:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:11.011 00:26:11.011 real 1m13.354s 00:26:11.011 user 7m12.710s 00:26:11.011 sys 0m17.190s 00:26:11.011 01:00:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:11.012 01:00:03 -- common/autotest_common.sh@10 -- # set +x 00:26:11.012 ************************************ 00:26:11.012 END TEST nvmf_dif 00:26:11.012 ************************************ 00:26:11.012 01:00:03 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:11.012 01:00:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:11.012 01:00:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:11.012 01:00:03 -- common/autotest_common.sh@10 -- # set +x 00:26:11.270 ************************************ 00:26:11.270 START TEST nvmf_abort_qd_sizes 00:26:11.270 ************************************ 00:26:11.270 01:00:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:11.270 * Looking for test storage... 
00:26:11.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:11.270 01:00:03 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.270 01:00:03 -- nvmf/common.sh@7 -- # uname -s 00:26:11.270 01:00:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.270 01:00:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.270 01:00:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.270 01:00:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.270 01:00:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.270 01:00:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.270 01:00:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.270 01:00:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.270 01:00:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.270 01:00:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.270 01:00:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:11.270 01:00:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:11.270 01:00:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.270 01:00:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.270 01:00:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.270 01:00:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.270 01:00:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.270 01:00:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.270 01:00:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.270 01:00:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:11.270 01:00:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.270 01:00:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.270 01:00:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.270 01:00:03 -- paths/export.sh@5 -- # export PATH 00:26:11.270 01:00:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.270 01:00:03 -- nvmf/common.sh@47 -- # : 0 00:26:11.270 01:00:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:11.270 01:00:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:11.270 01:00:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.270 01:00:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.270 01:00:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:11.270 01:00:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:11.270 01:00:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:11.270 01:00:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:11.270 01:00:03 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:11.270 01:00:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:11.270 01:00:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.270 01:00:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:11.270 01:00:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:11.270 01:00:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:11.270 01:00:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.270 01:00:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:11.270 01:00:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.270 01:00:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:11.270 01:00:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:11.270 01:00:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:11.270 01:00:03 -- common/autotest_common.sh@10 -- # set +x 00:26:16.549 01:00:09 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:16.549 01:00:09 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:16.549 01:00:09 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:16.549 01:00:09 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:16.549 01:00:09 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:16.549 01:00:09 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:16.549 01:00:09 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:16.549 01:00:09 -- nvmf/common.sh@295 -- # net_devs=() 00:26:16.549 01:00:09 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:16.549 01:00:09 -- nvmf/common.sh@296 -- # e810=() 00:26:16.549 01:00:09 -- nvmf/common.sh@296 -- # local -ga e810 00:26:16.549 01:00:09 -- nvmf/common.sh@297 -- # x722=() 00:26:16.549 01:00:09 -- nvmf/common.sh@297 -- # local -ga x722 00:26:16.549 01:00:09 -- nvmf/common.sh@298 -- # mlx=() 00:26:16.549 01:00:09 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:16.549 01:00:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.549 01:00:09 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.549 01:00:09 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.549 01:00:09 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.549 01:00:09 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.549 01:00:09 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.549 01:00:09 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.549 01:00:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.549 01:00:09 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.549 01:00:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.549 01:00:09 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.549 01:00:09 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:16.549 01:00:09 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:16.549 01:00:09 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:16.549 01:00:09 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:16.549 01:00:09 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:16.549 01:00:09 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:16.549 01:00:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.550 01:00:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:16.550 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:16.550 01:00:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.550 01:00:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.550 01:00:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.550 01:00:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.550 01:00:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.550 01:00:09 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:16.550 01:00:09 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:16.550 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:16.550 01:00:09 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:16.550 01:00:09 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:16.550 01:00:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.550 01:00:09 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.550 01:00:09 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:16.550 01:00:09 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:16.550 01:00:09 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:16.550 01:00:09 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:16.550 01:00:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.550 01:00:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.550 01:00:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:16.550 01:00:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.550 01:00:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:16.550 Found net devices under 0000:86:00.0: cvl_0_0 00:26:16.550 01:00:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.550 01:00:09 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:16.550 01:00:09 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.550 01:00:09 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:16.550 01:00:09 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.550 01:00:09 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:16.550 Found net devices under 0000:86:00.1: cvl_0_1 00:26:16.550 01:00:09 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.550 01:00:09 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:16.550 01:00:09 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:16.550 01:00:09 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:16.550 01:00:09 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:16.550 01:00:09 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:16.550 01:00:09 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.550 01:00:09 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.550 01:00:09 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.550 01:00:09 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:16.550 01:00:09 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.550 01:00:09 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.550 01:00:09 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:16.550 01:00:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.550 01:00:09 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.550 01:00:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:16.550 01:00:09 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:16.550 01:00:09 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.550 01:00:09 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:16.550 01:00:09 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.550 01:00:09 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.550 01:00:09 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:16.550 01:00:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.809 01:00:09 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.809 01:00:09 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:16.809 01:00:09 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:16.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:16.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:26:16.810 00:26:16.810 --- 10.0.0.2 ping statistics --- 00:26:16.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.810 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:26:16.810 01:00:09 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:16.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:16.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:26:16.810 00:26:16.810 --- 10.0.0.1 ping statistics --- 00:26:16.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:16.810 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:26:16.810 01:00:09 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:16.810 01:00:09 -- nvmf/common.sh@411 -- # return 0 00:26:16.810 01:00:09 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:26:16.810 01:00:09 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:19.340 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:19.340 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:19.607 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:20.175 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:20.433 01:00:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.433 01:00:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:20.433 01:00:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:20.433 01:00:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.433 01:00:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:20.433 01:00:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:20.433 01:00:12 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:20.433 01:00:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:20.433 01:00:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:20.433 01:00:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.433 01:00:12 -- nvmf/common.sh@470 -- # nvmfpid=1847816 00:26:20.433 01:00:12 -- nvmf/common.sh@471 -- # waitforlisten 1847816 00:26:20.433 01:00:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:20.433 01:00:12 -- common/autotest_common.sh@817 -- # '[' -z 1847816 ']' 00:26:20.433 01:00:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.433 01:00:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:20.433 01:00:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.433 01:00:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:20.433 01:00:12 -- common/autotest_common.sh@10 -- # set +x 00:26:20.433 [2024-04-27 01:00:13.041746] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:26:20.433 [2024-04-27 01:00:13.041789] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.433 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.433 [2024-04-27 01:00:13.100589] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:20.692 [2024-04-27 01:00:13.182106] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.692 [2024-04-27 01:00:13.182148] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.692 [2024-04-27 01:00:13.182156] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.692 [2024-04-27 01:00:13.182163] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.692 [2024-04-27 01:00:13.182169] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.692 [2024-04-27 01:00:13.182209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.692 [2024-04-27 01:00:13.182224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:20.692 [2024-04-27 01:00:13.182240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:20.692 [2024-04-27 01:00:13.182242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.269 01:00:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:21.269 01:00:13 -- common/autotest_common.sh@850 -- # return 0 00:26:21.269 01:00:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:21.269 01:00:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:21.269 01:00:13 -- common/autotest_common.sh@10 -- # set +x 00:26:21.269 01:00:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.269 01:00:13 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:21.269 01:00:13 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:21.269 01:00:13 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:21.269 01:00:13 -- scripts/common.sh@309 -- # local bdf bdfs 00:26:21.269 01:00:13 -- scripts/common.sh@310 -- # local nvmes 00:26:21.269 01:00:13 -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:26:21.269 01:00:13 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:21.269 01:00:13 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:21.269 01:00:13 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:26:21.269 01:00:13 -- scripts/common.sh@320 -- # uname -s 00:26:21.269 01:00:13 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:21.269 01:00:13 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:21.269 01:00:13 -- scripts/common.sh@325 -- # (( 1 )) 00:26:21.269 01:00:13 -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:26:21.269 01:00:13 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:21.269 01:00:13 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:26:21.269 01:00:13 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:21.269 01:00:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:21.269 01:00:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:21.269 01:00:13 -- 
common/autotest_common.sh@10 -- # set +x 00:26:21.529 ************************************ 00:26:21.529 START TEST spdk_target_abort 00:26:21.529 ************************************ 00:26:21.529 01:00:14 -- common/autotest_common.sh@1111 -- # spdk_target 00:26:21.529 01:00:14 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:21.529 01:00:14 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:26:21.529 01:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:21.529 01:00:14 -- common/autotest_common.sh@10 -- # set +x 00:26:24.812 spdk_targetn1 00:26:24.812 01:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:24.812 01:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.812 01:00:16 -- common/autotest_common.sh@10 -- # set +x 00:26:24.812 [2024-04-27 01:00:16.868522] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.812 01:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:24.812 01:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.812 01:00:16 -- common/autotest_common.sh@10 -- # set +x 00:26:24.812 01:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:24.812 01:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.812 01:00:16 -- common/autotest_common.sh@10 -- # set +x 00:26:24.812 01:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:24.812 01:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.812 01:00:16 -- common/autotest_common.sh@10 -- # set +x 00:26:24.812 [2024-04-27 01:00:16.901449] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.812 01:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
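rabort() assembles the transport ID string one field at a time (the trace continues below) and then drives SPDK's abort example once per queue depth in qds=(4 24 64). The net effect, reconstructed as a standalone loop with the binary path and options copied from the trace (target/abort_qd_sizes.sh@26-34):

# Sketch of the invocations rabort() performs against the spdk_target subsystem.
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done
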
00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:24.812 01:00:16 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:24.812 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.115 Initializing NVMe Controllers 00:26:28.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:28.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:28.115 Initialization complete. Launching workers. 00:26:28.115 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5947, failed: 0 00:26:28.115 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1565, failed to submit 4382 00:26:28.115 success 934, unsuccess 631, failed 0 00:26:28.115 01:00:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:28.115 01:00:20 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:28.115 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.648 [2024-04-27 01:00:23.311120] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff5e30 is same with the state(5) to be set 00:26:30.648 [2024-04-27 01:00:23.311157] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff5e30 is same with the state(5) to be set 00:26:30.905 Initializing NVMe Controllers 00:26:30.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:30.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:30.905 Initialization complete. Launching workers. 00:26:30.905 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8547, failed: 0 00:26:30.905 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1243, failed to submit 7304 00:26:30.905 success 282, unsuccess 961, failed 0 00:26:30.905 01:00:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:30.905 01:00:23 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:30.905 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.208 Initializing NVMe Controllers 00:26:34.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:26:34.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:34.208 Initialization complete. Launching workers. 
00:26:34.208 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33456, failed: 0 00:26:34.208 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2757, failed to submit 30699 00:26:34.208 success 680, unsuccess 2077, failed 0 00:26:34.208 01:00:26 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:26:34.208 01:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.208 01:00:26 -- common/autotest_common.sh@10 -- # set +x 00:26:34.208 01:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.208 01:00:26 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:34.208 01:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.208 01:00:26 -- common/autotest_common.sh@10 -- # set +x 00:26:35.583 01:00:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:35.583 01:00:27 -- target/abort_qd_sizes.sh@61 -- # killprocess 1847816 00:26:35.583 01:00:27 -- common/autotest_common.sh@936 -- # '[' -z 1847816 ']' 00:26:35.583 01:00:27 -- common/autotest_common.sh@940 -- # kill -0 1847816 00:26:35.583 01:00:27 -- common/autotest_common.sh@941 -- # uname 00:26:35.583 01:00:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:35.583 01:00:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1847816 00:26:35.583 01:00:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:35.583 01:00:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:35.583 01:00:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1847816' 00:26:35.583 killing process with pid 1847816 00:26:35.583 01:00:27 -- common/autotest_common.sh@955 -- # kill 1847816 00:26:35.583 01:00:27 -- common/autotest_common.sh@960 -- # wait 1847816 00:26:35.583 00:26:35.583 real 0m14.141s 00:26:35.583 user 0m56.753s 00:26:35.583 sys 0m2.169s 00:26:35.583 01:00:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:35.583 01:00:28 -- common/autotest_common.sh@10 -- # set +x 00:26:35.583 ************************************ 00:26:35.583 END TEST spdk_target_abort 00:26:35.583 ************************************ 00:26:35.583 01:00:28 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:26:35.583 01:00:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:35.583 01:00:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:35.583 01:00:28 -- common/autotest_common.sh@10 -- # set +x 00:26:35.842 ************************************ 00:26:35.842 START TEST kernel_target_abort 00:26:35.842 ************************************ 00:26:35.842 01:00:28 -- common/autotest_common.sh@1111 -- # kernel_target 00:26:35.842 01:00:28 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:26:35.842 01:00:28 -- nvmf/common.sh@717 -- # local ip 00:26:35.842 01:00:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:35.842 01:00:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:35.842 01:00:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.842 01:00:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.842 01:00:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:35.842 01:00:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.842 01:00:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:35.842 01:00:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:35.842 01:00:28 -- nvmf/common.sh@731 -- # 
echo 10.0.0.1 00:26:35.842 01:00:28 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:35.842 01:00:28 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:35.842 01:00:28 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:26:35.842 01:00:28 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:35.842 01:00:28 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:35.842 01:00:28 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:35.842 01:00:28 -- nvmf/common.sh@628 -- # local block nvme 00:26:35.842 01:00:28 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:26:35.842 01:00:28 -- nvmf/common.sh@631 -- # modprobe nvmet 00:26:35.842 01:00:28 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:35.842 01:00:28 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:38.383 Waiting for block devices as requested 00:26:38.383 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:38.383 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:38.383 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:38.641 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:38.641 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:38.641 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:38.641 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:38.899 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:38.899 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:38.899 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:38.899 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:39.157 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:39.157 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:39.157 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:39.416 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:39.416 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:39.416 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:39.674 01:00:32 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:39.674 01:00:32 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:39.674 01:00:32 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:26:39.674 01:00:32 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:39.675 01:00:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:39.675 01:00:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:39.675 01:00:32 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:26:39.675 01:00:32 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:39.675 01:00:32 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:39.675 No valid GPT data, bailing 00:26:39.675 01:00:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:39.675 01:00:32 -- scripts/common.sh@391 -- # pt= 00:26:39.675 01:00:32 -- scripts/common.sh@392 -- # return 1 00:26:39.675 01:00:32 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:26:39.675 01:00:32 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:26:39.675 01:00:32 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:39.675 01:00:32 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:39.675 01:00:32 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:39.675 01:00:32 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:39.675 01:00:32 -- nvmf/common.sh@656 -- # echo 1 00:26:39.675 01:00:32 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:26:39.675 01:00:32 -- nvmf/common.sh@658 -- # echo 1 00:26:39.675 01:00:32 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:26:39.675 01:00:32 -- nvmf/common.sh@661 -- # echo tcp 00:26:39.675 01:00:32 -- nvmf/common.sh@662 -- # echo 4420 00:26:39.675 01:00:32 -- nvmf/common.sh@663 -- # echo ipv4 00:26:39.675 01:00:32 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:39.675 01:00:32 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:39.675 00:26:39.675 Discovery Log Number of Records 2, Generation counter 2 00:26:39.675 =====Discovery Log Entry 0====== 00:26:39.675 trtype: tcp 00:26:39.675 adrfam: ipv4 00:26:39.675 subtype: current discovery subsystem 00:26:39.675 treq: not specified, sq flow control disable supported 00:26:39.675 portid: 1 00:26:39.675 trsvcid: 4420 00:26:39.675 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:39.675 traddr: 10.0.0.1 00:26:39.675 eflags: none 00:26:39.675 sectype: none 00:26:39.675 =====Discovery Log Entry 1====== 00:26:39.675 trtype: tcp 00:26:39.675 adrfam: ipv4 00:26:39.675 subtype: nvme subsystem 00:26:39.675 treq: not specified, sq flow control disable supported 00:26:39.675 portid: 1 00:26:39.675 trsvcid: 4420 00:26:39.675 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:39.675 traddr: 10.0.0.1 00:26:39.675 eflags: none 00:26:39.675 sectype: none 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:39.675 01:00:32 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:39.675 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.955 Initializing NVMe Controllers 00:26:42.955 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:42.955 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:42.955 Initialization complete. Launching workers. 00:26:42.955 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35556, failed: 0 00:26:42.955 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35556, failed to submit 0 00:26:42.955 success 0, unsuccess 35556, failed 0 00:26:42.955 01:00:35 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:42.955 01:00:35 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:42.956 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.240 Initializing NVMe Controllers 00:26:46.240 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:46.240 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:46.240 Initialization complete. Launching workers. 00:26:46.240 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72118, failed: 0 00:26:46.240 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18206, failed to submit 53912 00:26:46.240 success 0, unsuccess 18206, failed 0 00:26:46.240 01:00:38 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:46.240 01:00:38 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:46.240 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.558 Initializing NVMe Controllers 00:26:49.558 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:49.558 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:26:49.558 Initialization complete. Launching workers. 
00:26:49.558 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70438, failed: 0 00:26:49.558 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17614, failed to submit 52824 00:26:49.558 success 0, unsuccess 17614, failed 0 00:26:49.558 01:00:41 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:26:49.558 01:00:41 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:49.558 01:00:41 -- nvmf/common.sh@675 -- # echo 0 00:26:49.558 01:00:41 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.558 01:00:41 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:49.558 01:00:41 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:49.558 01:00:41 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:49.558 01:00:41 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:26:49.558 01:00:41 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:26:49.558 01:00:41 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:51.465 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:51.465 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:51.465 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:51.465 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:51.465 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:51.465 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:51.465 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:51.465 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:51.465 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:51.465 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:51.465 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:51.725 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:51.725 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:51.725 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:51.725 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:51.725 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:52.662 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:52.662 00:26:52.662 real 0m16.785s 00:26:52.662 user 0m4.787s 00:26:52.662 sys 0m5.484s 00:26:52.662 01:00:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:52.662 01:00:45 -- common/autotest_common.sh@10 -- # set +x 00:26:52.662 ************************************ 00:26:52.662 END TEST kernel_target_abort 00:26:52.662 ************************************ 00:26:52.662 01:00:45 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:52.662 01:00:45 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:26:52.662 01:00:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:52.662 01:00:45 -- nvmf/common.sh@117 -- # sync 00:26:52.662 01:00:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:52.662 01:00:45 -- nvmf/common.sh@120 -- # set +e 00:26:52.662 01:00:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:52.662 01:00:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:52.662 rmmod nvme_tcp 00:26:52.662 rmmod nvme_fabrics 00:26:52.662 rmmod nvme_keyring 00:26:52.662 01:00:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:52.662 01:00:45 -- nvmf/common.sh@124 -- # set -e 00:26:52.662 01:00:45 -- nvmf/common.sh@125 -- # return 0 00:26:52.662 01:00:45 -- nvmf/common.sh@478 -- # '[' -n 1847816 ']' 
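The kernel_target_abort leg that just finished drives the in-kernel nvmet target through configfs instead of an SPDK app. A minimal sketch of that setup and teardown follows; the trace only shows the values being echoed, so the destination attribute names are the standard nvmet configfs ones and are an assumption on my part:

    modprobe nvmet
    modprobe nvmet_tcp
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    # One subsystem with one namespace backed by the local NVMe block device
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"

    # TCP listener on 10.0.0.1:4420, then link the subsystem to the port
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"

    # Teardown, mirroring clean_kernel_target above
    echo 0 > "$sub/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$sub/namespaces/1" "$port" "$sub"
    modprobe -r nvmet_tcp nvmet
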
00:26:52.662 01:00:45 -- nvmf/common.sh@479 -- # killprocess 1847816 00:26:52.662 01:00:45 -- common/autotest_common.sh@936 -- # '[' -z 1847816 ']' 00:26:52.662 01:00:45 -- common/autotest_common.sh@940 -- # kill -0 1847816 00:26:52.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1847816) - No such process 00:26:52.662 01:00:45 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1847816 is not found' 00:26:52.662 Process with pid 1847816 is not found 00:26:52.662 01:00:45 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:26:52.662 01:00:45 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:55.203 Waiting for block devices as requested 00:26:55.203 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:55.203 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:55.203 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:55.461 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:55.461 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:55.461 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:55.461 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:55.719 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:55.719 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:55.719 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:55.719 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:55.978 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:55.978 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:55.978 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:56.237 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:56.237 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:56.237 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:56.237 01:00:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:56.237 01:00:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:56.237 01:00:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:56.237 01:00:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:56.237 01:00:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.237 01:00:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:56.237 01:00:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.768 01:00:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:58.768 00:26:58.768 real 0m47.217s 00:26:58.768 user 1m5.616s 00:26:58.768 sys 0m15.834s 00:26:58.768 01:00:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:58.768 01:00:50 -- common/autotest_common.sh@10 -- # set +x 00:26:58.768 ************************************ 00:26:58.768 END TEST nvmf_abort_qd_sizes 00:26:58.768 ************************************ 00:26:58.768 01:00:51 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:58.768 01:00:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:58.768 01:00:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:58.768 01:00:51 -- common/autotest_common.sh@10 -- # set +x 00:26:58.768 ************************************ 00:26:58.768 START TEST keyring_file 00:26:58.768 ************************************ 00:26:58.768 01:00:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:58.768 * Looking for test storage... 
00:26:58.768 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:26:58.768 01:00:51 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:26:58.768 01:00:51 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.768 01:00:51 -- nvmf/common.sh@7 -- # uname -s 00:26:58.768 01:00:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.769 01:00:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.769 01:00:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.769 01:00:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.769 01:00:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.769 01:00:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.769 01:00:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.769 01:00:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.769 01:00:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.769 01:00:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.769 01:00:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:58.769 01:00:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:58.769 01:00:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.769 01:00:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.769 01:00:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.769 01:00:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.769 01:00:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.769 01:00:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.769 01:00:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.769 01:00:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.769 01:00:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.769 01:00:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.769 01:00:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.769 01:00:51 -- paths/export.sh@5 -- # export PATH 00:26:58.769 01:00:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.769 01:00:51 -- nvmf/common.sh@47 -- # : 0 00:26:58.769 01:00:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:58.769 01:00:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:58.769 01:00:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.769 01:00:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.769 01:00:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.769 01:00:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:58.769 01:00:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:58.769 01:00:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:58.769 01:00:51 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:58.769 01:00:51 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:58.769 01:00:51 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:58.769 01:00:51 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:26:58.769 01:00:51 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:26:58.769 01:00:51 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:26:58.769 01:00:51 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:58.769 01:00:51 -- keyring/common.sh@15 -- # local name key digest path 00:26:58.769 01:00:51 -- keyring/common.sh@17 -- # name=key0 00:26:58.769 01:00:51 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:58.769 01:00:51 -- keyring/common.sh@17 -- # digest=0 00:26:58.769 01:00:51 -- keyring/common.sh@18 -- # mktemp 00:26:58.769 01:00:51 -- keyring/common.sh@18 -- # path=/tmp/tmp.LDgbhzG0kG 00:26:58.769 01:00:51 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:58.769 01:00:51 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:58.769 01:00:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:58.769 01:00:51 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:26:58.769 01:00:51 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:26:58.769 01:00:51 -- nvmf/common.sh@693 -- # digest=0 00:26:58.769 01:00:51 -- nvmf/common.sh@694 -- # python - 00:26:58.769 01:00:51 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LDgbhzG0kG 00:26:58.769 01:00:51 -- keyring/common.sh@23 -- # echo /tmp/tmp.LDgbhzG0kG 00:26:58.769 01:00:51 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.LDgbhzG0kG 00:26:58.769 01:00:51 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:26:58.769 01:00:51 -- keyring/common.sh@15 -- # local name key digest path 00:26:58.769 01:00:51 -- keyring/common.sh@17 -- # name=key1 00:26:58.769 01:00:51 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:58.769 01:00:51 -- keyring/common.sh@17 -- # digest=0 00:26:58.769 01:00:51 -- keyring/common.sh@18 -- # mktemp 00:26:58.769 01:00:51 -- keyring/common.sh@18 -- # path=/tmp/tmp.5Jb9lwBKiG 00:26:58.769 01:00:51 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:58.769 01:00:51 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:26:58.769 01:00:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:58.769 01:00:51 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:26:58.769 01:00:51 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:26:58.769 01:00:51 -- nvmf/common.sh@693 -- # digest=0 00:26:58.769 01:00:51 -- nvmf/common.sh@694 -- # python - 00:26:58.769 01:00:51 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5Jb9lwBKiG 00:26:58.769 01:00:51 -- keyring/common.sh@23 -- # echo /tmp/tmp.5Jb9lwBKiG 00:26:58.769 01:00:51 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5Jb9lwBKiG 00:26:58.769 01:00:51 -- keyring/file.sh@30 -- # tgtpid=1856609 00:26:58.769 01:00:51 -- keyring/file.sh@32 -- # waitforlisten 1856609 00:26:58.769 01:00:51 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:26:58.769 01:00:51 -- common/autotest_common.sh@817 -- # '[' -z 1856609 ']' 00:26:58.769 01:00:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.769 01:00:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:58.769 01:00:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.769 01:00:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:58.769 01:00:51 -- common/autotest_common.sh@10 -- # set +x 00:26:58.769 [2024-04-27 01:00:51.433331] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:26:58.769 [2024-04-27 01:00:51.433377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856609 ] 00:26:58.769 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.028 [2024-04-27 01:00:51.487929] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.028 [2024-04-27 01:00:51.557760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.595 01:00:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:59.595 01:00:52 -- common/autotest_common.sh@850 -- # return 0 00:26:59.595 01:00:52 -- keyring/file.sh@33 -- # rpc_cmd 00:26:59.595 01:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.595 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:26:59.595 [2024-04-27 01:00:52.220367] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.595 null0 00:26:59.595 [2024-04-27 01:00:52.252421] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:59.595 [2024-04-27 01:00:52.252744] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:59.595 [2024-04-27 01:00:52.260438] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:59.595 01:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.595 01:00:52 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:59.595 01:00:52 -- common/autotest_common.sh@638 -- # local es=0 00:26:59.595 01:00:52 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:59.595 01:00:52 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:59.595 01:00:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:59.595 01:00:52 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:59.595 01:00:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:59.595 01:00:52 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:59.595 01:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.595 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:26:59.595 [2024-04-27 01:00:52.276483] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:26:59.595 { 00:26:59.595 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:26:59.595 "secure_channel": false, 00:26:59.595 "listen_address": { 00:26:59.595 "trtype": "tcp", 00:26:59.595 "traddr": "127.0.0.1", 00:26:59.595 "trsvcid": "4420" 00:26:59.595 }, 00:26:59.595 "method": "nvmf_subsystem_add_listener", 00:26:59.595 "req_id": 1 00:26:59.595 } 00:26:59.595 Got JSON-RPC error response 00:26:59.595 response: 00:26:59.595 { 00:26:59.595 "code": -32602, 00:26:59.595 "message": "Invalid parameters" 00:26:59.595 } 00:26:59.595 01:00:52 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:59.595 01:00:52 -- common/autotest_common.sh@641 -- # es=1 00:26:59.595 01:00:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:59.595 01:00:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:59.595 01:00:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:59.595 01:00:52 -- keyring/file.sh@46 -- # bperfpid=1856646 00:26:59.595 01:00:52 -- keyring/file.sh@48 -- # waitforlisten 1856646 /var/tmp/bperf.sock 00:26:59.595 01:00:52 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:26:59.595 01:00:52 -- common/autotest_common.sh@817 -- # '[' -z 1856646 ']' 00:26:59.595 01:00:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:59.595 01:00:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:59.595 01:00:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:59.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:59.595 01:00:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:59.595 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:26:59.854 [2024-04-27 01:00:52.329075] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
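The keyring_file test starting here does not run I/O against the target process directly; it launches bdevperf idle on a private RPC socket and drives it with rpc.py, which is what every bperf_cmd in the trace expands to. A rough sketch of that pattern, with paths relative to the SPDK tree:

    # Start bdevperf idle (-z: wait for RPC) on its own socket, separate from the target's
    build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z &

    # Every bperf_cmd in the trace is rpc.py pointed at that socket
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys

    # Once bdevs have been attached, launch the configured workload
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
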
00:26:59.854 [2024-04-27 01:00:52.329122] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856646 ] 00:26:59.854 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.854 [2024-04-27 01:00:52.383426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.854 [2024-04-27 01:00:52.460973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.789 01:00:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:00.789 01:00:53 -- common/autotest_common.sh@850 -- # return 0 00:27:00.789 01:00:53 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LDgbhzG0kG 00:27:00.789 01:00:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LDgbhzG0kG 00:27:00.789 01:00:53 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5Jb9lwBKiG 00:27:00.789 01:00:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5Jb9lwBKiG 00:27:01.047 01:00:53 -- keyring/file.sh@51 -- # get_key key0 00:27:01.047 01:00:53 -- keyring/file.sh@51 -- # jq -r .path 00:27:01.047 01:00:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:01.047 01:00:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:01.047 01:00:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:01.047 01:00:53 -- keyring/file.sh@51 -- # [[ /tmp/tmp.LDgbhzG0kG == \/\t\m\p\/\t\m\p\.\L\D\g\b\h\z\G\0\k\G ]] 00:27:01.047 01:00:53 -- keyring/file.sh@52 -- # get_key key1 00:27:01.047 01:00:53 -- keyring/file.sh@52 -- # jq -r .path 00:27:01.047 01:00:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:01.047 01:00:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:01.047 01:00:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:01.311 01:00:53 -- keyring/file.sh@52 -- # [[ /tmp/tmp.5Jb9lwBKiG == \/\t\m\p\/\t\m\p\.\5\J\b\9\l\w\B\K\i\G ]] 00:27:01.311 01:00:53 -- keyring/file.sh@53 -- # get_refcnt key0 00:27:01.311 01:00:53 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:01.311 01:00:53 -- keyring/common.sh@12 -- # get_key key0 00:27:01.311 01:00:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:01.311 01:00:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:01.311 01:00:53 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:01.569 01:00:54 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:01.569 01:00:54 -- keyring/file.sh@54 -- # get_refcnt key1 00:27:01.569 01:00:54 -- keyring/common.sh@12 -- # get_key key1 00:27:01.569 01:00:54 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:01.569 01:00:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:01.569 01:00:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:01.569 01:00:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:01.569 01:00:54 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:01.569 
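With both PSK files registered and their paths verified above, the positive path that follows attaches a TLS-protected NVMe/TCP controller referencing key0 by name and then runs I/O. Condensed into a sketch; the key paths are the temporaries from this particular run, and the note about key1 reflects the failed attach seen later in the log:

    rpc="scripts/rpc.py -s /var/tmp/bperf.sock"

    # PSK files must be mode 0600; a 0660 file is rejected by keyring_file_add_key
    chmod 0600 /tmp/tmp.LDgbhzG0kG /tmp/tmp.5Jb9lwBKiG
    $rpc keyring_file_add_key key0 /tmp/tmp.LDgbhzG0kG
    $rpc keyring_file_add_key key1 /tmp/tmp.5Jb9lwBKiG

    # Attach a TLS NVMe/TCP controller that references key0 by name; attaching with
    # key1 fails, since the target-side host entry was set up with the first key
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    # Each attached controller holds a reference on its key
    $rpc keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'
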
01:00:54 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:01.569 01:00:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:01.828 [2024-04-27 01:00:54.338014] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:01.828 nvme0n1 00:27:01.828 01:00:54 -- keyring/file.sh@59 -- # get_refcnt key0 00:27:01.828 01:00:54 -- keyring/common.sh@12 -- # get_key key0 00:27:01.828 01:00:54 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:01.828 01:00:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:01.828 01:00:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:01.828 01:00:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:02.087 01:00:54 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:02.087 01:00:54 -- keyring/file.sh@60 -- # get_refcnt key1 00:27:02.087 01:00:54 -- keyring/common.sh@12 -- # get_key key1 00:27:02.087 01:00:54 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:02.087 01:00:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:02.087 01:00:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:02.087 01:00:54 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:02.087 01:00:54 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:02.087 01:00:54 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:02.345 Running I/O for 1 seconds... 
00:27:03.280 00:27:03.280 Latency(us) 00:27:03.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.280 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:03.280 nvme0n1 : 1.02 4693.88 18.34 0.00 0.00 27019.82 8434.20 44222.55 00:27:03.280 =================================================================================================================== 00:27:03.280 Total : 4693.88 18.34 0.00 0.00 27019.82 8434.20 44222.55 00:27:03.280 0 00:27:03.280 01:00:55 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:03.280 01:00:55 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:03.538 01:00:56 -- keyring/file.sh@65 -- # get_refcnt key0 00:27:03.538 01:00:56 -- keyring/common.sh@12 -- # get_key key0 00:27:03.538 01:00:56 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:03.538 01:00:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:03.538 01:00:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:03.538 01:00:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:03.796 01:00:56 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:03.796 01:00:56 -- keyring/file.sh@66 -- # get_refcnt key1 00:27:03.796 01:00:56 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:03.796 01:00:56 -- keyring/common.sh@12 -- # get_key key1 00:27:03.796 01:00:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:03.796 01:00:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:03.796 01:00:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:03.796 01:00:56 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:03.796 01:00:56 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:03.796 01:00:56 -- common/autotest_common.sh@638 -- # local es=0 00:27:03.796 01:00:56 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:03.796 01:00:56 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:27:03.796 01:00:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:03.796 01:00:56 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:27:03.796 01:00:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:03.796 01:00:56 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:03.796 01:00:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:04.055 [2024-04-27 01:00:56.613562] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:04.055 [2024-04-27 01:00:56.613917] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248dbd0 (107): Transport endpoint is not connected 00:27:04.055 [2024-04-27 01:00:56.614912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248dbd0 (9): Bad file descriptor 00:27:04.055 [2024-04-27 01:00:56.615911] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:04.055 [2024-04-27 01:00:56.615923] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:04.055 [2024-04-27 01:00:56.615930] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:04.055 request: 00:27:04.055 { 00:27:04.055 "name": "nvme0", 00:27:04.055 "trtype": "tcp", 00:27:04.055 "traddr": "127.0.0.1", 00:27:04.055 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:04.055 "adrfam": "ipv4", 00:27:04.055 "trsvcid": "4420", 00:27:04.055 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:04.055 "psk": "key1", 00:27:04.056 "method": "bdev_nvme_attach_controller", 00:27:04.056 "req_id": 1 00:27:04.056 } 00:27:04.056 Got JSON-RPC error response 00:27:04.056 response: 00:27:04.056 { 00:27:04.056 "code": -32602, 00:27:04.056 "message": "Invalid parameters" 00:27:04.056 } 00:27:04.056 01:00:56 -- common/autotest_common.sh@641 -- # es=1 00:27:04.056 01:00:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:04.056 01:00:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:04.056 01:00:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:04.056 01:00:56 -- keyring/file.sh@71 -- # get_refcnt key0 00:27:04.056 01:00:56 -- keyring/common.sh@12 -- # get_key key0 00:27:04.056 01:00:56 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:04.056 01:00:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:04.056 01:00:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:04.056 01:00:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:04.314 01:00:56 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:04.314 01:00:56 -- keyring/file.sh@72 -- # get_refcnt key1 00:27:04.314 01:00:56 -- keyring/common.sh@12 -- # get_key key1 00:27:04.315 01:00:56 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:04.315 01:00:56 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:04.315 01:00:56 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:04.315 01:00:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:04.315 01:00:56 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:04.315 01:00:56 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:04.315 01:00:56 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:04.571 01:00:57 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:04.572 01:00:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:04.829 01:00:57 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:04.829 01:00:57 -- keyring/file.sh@77 -- # jq length 00:27:04.829 01:00:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:05.087 01:00:57 
-- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:05.087 01:00:57 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.LDgbhzG0kG 00:27:05.087 01:00:57 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.LDgbhzG0kG 00:27:05.087 01:00:57 -- common/autotest_common.sh@638 -- # local es=0 00:27:05.087 01:00:57 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.LDgbhzG0kG 00:27:05.087 01:00:57 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:27:05.087 01:00:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:05.087 01:00:57 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:27:05.087 01:00:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:05.087 01:00:57 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LDgbhzG0kG 00:27:05.087 01:00:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LDgbhzG0kG 00:27:05.087 [2024-04-27 01:00:57.677883] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.LDgbhzG0kG': 0100660 00:27:05.087 [2024-04-27 01:00:57.677905] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:05.087 request: 00:27:05.087 { 00:27:05.087 "name": "key0", 00:27:05.087 "path": "/tmp/tmp.LDgbhzG0kG", 00:27:05.087 "method": "keyring_file_add_key", 00:27:05.087 "req_id": 1 00:27:05.087 } 00:27:05.087 Got JSON-RPC error response 00:27:05.087 response: 00:27:05.087 { 00:27:05.087 "code": -1, 00:27:05.087 "message": "Operation not permitted" 00:27:05.087 } 00:27:05.087 01:00:57 -- common/autotest_common.sh@641 -- # es=1 00:27:05.087 01:00:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:05.087 01:00:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:05.087 01:00:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:05.087 01:00:57 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.LDgbhzG0kG 00:27:05.087 01:00:57 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LDgbhzG0kG 00:27:05.087 01:00:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LDgbhzG0kG 00:27:05.345 01:00:57 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.LDgbhzG0kG 00:27:05.345 01:00:57 -- keyring/file.sh@88 -- # get_refcnt key0 00:27:05.345 01:00:57 -- keyring/common.sh@12 -- # get_key key0 00:27:05.345 01:00:57 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:05.345 01:00:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:05.345 01:00:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:05.345 01:00:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:05.345 01:00:58 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:05.345 01:00:58 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:05.345 01:00:58 -- common/autotest_common.sh@638 -- # local es=0 00:27:05.345 01:00:58 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:05.345 01:00:58 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:27:05.345 01:00:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:05.345 01:00:58 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:27:05.345 01:00:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:05.345 01:00:58 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:05.345 01:00:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:05.603 [2024-04-27 01:00:58.199252] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.LDgbhzG0kG': No such file or directory 00:27:05.603 [2024-04-27 01:00:58.199273] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:05.603 [2024-04-27 01:00:58.199292] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:05.603 [2024-04-27 01:00:58.199298] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:05.603 [2024-04-27 01:00:58.199304] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:05.603 request: 00:27:05.603 { 00:27:05.603 "name": "nvme0", 00:27:05.603 "trtype": "tcp", 00:27:05.603 "traddr": "127.0.0.1", 00:27:05.603 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:05.603 "adrfam": "ipv4", 00:27:05.603 "trsvcid": "4420", 00:27:05.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:05.603 "psk": "key0", 00:27:05.603 "method": "bdev_nvme_attach_controller", 00:27:05.603 "req_id": 1 00:27:05.603 } 00:27:05.603 Got JSON-RPC error response 00:27:05.603 response: 00:27:05.603 { 00:27:05.603 "code": -19, 00:27:05.603 "message": "No such device" 00:27:05.603 } 00:27:05.603 01:00:58 -- common/autotest_common.sh@641 -- # es=1 00:27:05.603 01:00:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:05.603 01:00:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:05.603 01:00:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:05.603 01:00:58 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:05.603 01:00:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:05.862 01:00:58 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:05.862 01:00:58 -- keyring/common.sh@15 -- # local name key digest path 00:27:05.862 01:00:58 -- keyring/common.sh@17 -- # name=key0 00:27:05.862 01:00:58 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:05.862 01:00:58 -- keyring/common.sh@17 -- # digest=0 00:27:05.862 01:00:58 -- keyring/common.sh@18 -- # mktemp 00:27:05.862 01:00:58 -- keyring/common.sh@18 -- # path=/tmp/tmp.XlwoBs3h3g 00:27:05.862 01:00:58 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:05.862 01:00:58 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:05.862 01:00:58 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:05.862 01:00:58 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:27:05.862 01:00:58 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:27:05.862 01:00:58 -- nvmf/common.sh@693 -- # digest=0 00:27:05.862 01:00:58 -- nvmf/common.sh@694 -- # python - 00:27:05.862 01:00:58 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XlwoBs3h3g 00:27:05.862 01:00:58 -- keyring/common.sh@23 -- # echo /tmp/tmp.XlwoBs3h3g 00:27:05.862 01:00:58 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.XlwoBs3h3g 00:27:05.862 01:00:58 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XlwoBs3h3g 00:27:05.862 01:00:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XlwoBs3h3g 00:27:06.120 01:00:58 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:06.120 01:00:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:06.377 nvme0n1 00:27:06.377 01:00:58 -- keyring/file.sh@99 -- # get_refcnt key0 00:27:06.377 01:00:58 -- keyring/common.sh@12 -- # get_key key0 00:27:06.377 01:00:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:06.377 01:00:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:06.377 01:00:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:06.377 01:00:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:06.377 01:00:59 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:06.377 01:00:59 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:06.377 01:00:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:06.636 01:00:59 -- keyring/file.sh@101 -- # get_key key0 00:27:06.636 01:00:59 -- keyring/file.sh@101 -- # jq -r .removed 00:27:06.636 01:00:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:06.636 01:00:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:06.636 01:00:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:06.895 01:00:59 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:06.895 01:00:59 -- keyring/file.sh@102 -- # get_refcnt key0 00:27:06.895 01:00:59 -- keyring/common.sh@12 -- # get_key key0 00:27:06.895 01:00:59 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:06.895 01:00:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:06.895 01:00:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:06.895 01:00:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:06.895 01:00:59 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:06.895 01:00:59 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:06.895 01:00:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:07.153 01:00:59 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:07.153 01:00:59 -- keyring/file.sh@104 -- # jq length 00:27:07.153 
01:00:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:07.410 01:00:59 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:07.410 01:00:59 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XlwoBs3h3g 00:27:07.410 01:00:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XlwoBs3h3g 00:27:07.410 01:01:00 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5Jb9lwBKiG 00:27:07.410 01:01:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5Jb9lwBKiG 00:27:07.669 01:01:00 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:07.669 01:01:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:07.928 nvme0n1 00:27:07.928 01:01:00 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:07.928 01:01:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:08.187 01:01:00 -- keyring/file.sh@112 -- # config='{ 00:27:08.187 "subsystems": [ 00:27:08.187 { 00:27:08.187 "subsystem": "keyring", 00:27:08.187 "config": [ 00:27:08.187 { 00:27:08.187 "method": "keyring_file_add_key", 00:27:08.187 "params": { 00:27:08.187 "name": "key0", 00:27:08.187 "path": "/tmp/tmp.XlwoBs3h3g" 00:27:08.187 } 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "method": "keyring_file_add_key", 00:27:08.187 "params": { 00:27:08.187 "name": "key1", 00:27:08.187 "path": "/tmp/tmp.5Jb9lwBKiG" 00:27:08.187 } 00:27:08.187 } 00:27:08.187 ] 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "subsystem": "iobuf", 00:27:08.187 "config": [ 00:27:08.187 { 00:27:08.187 "method": "iobuf_set_options", 00:27:08.187 "params": { 00:27:08.187 "small_pool_count": 8192, 00:27:08.187 "large_pool_count": 1024, 00:27:08.187 "small_bufsize": 8192, 00:27:08.187 "large_bufsize": 135168 00:27:08.187 } 00:27:08.187 } 00:27:08.187 ] 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "subsystem": "sock", 00:27:08.187 "config": [ 00:27:08.187 { 00:27:08.187 "method": "sock_impl_set_options", 00:27:08.187 "params": { 00:27:08.187 "impl_name": "posix", 00:27:08.187 "recv_buf_size": 2097152, 00:27:08.187 "send_buf_size": 2097152, 00:27:08.187 "enable_recv_pipe": true, 00:27:08.187 "enable_quickack": false, 00:27:08.187 "enable_placement_id": 0, 00:27:08.187 "enable_zerocopy_send_server": true, 00:27:08.187 "enable_zerocopy_send_client": false, 00:27:08.187 "zerocopy_threshold": 0, 00:27:08.187 "tls_version": 0, 00:27:08.187 "enable_ktls": false 00:27:08.187 } 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "method": "sock_impl_set_options", 00:27:08.187 "params": { 00:27:08.187 "impl_name": "ssl", 00:27:08.187 "recv_buf_size": 4096, 00:27:08.187 "send_buf_size": 4096, 00:27:08.187 "enable_recv_pipe": true, 00:27:08.187 "enable_quickack": false, 00:27:08.187 "enable_placement_id": 0, 00:27:08.187 "enable_zerocopy_send_server": true, 00:27:08.187 "enable_zerocopy_send_client": false, 00:27:08.187 "zerocopy_threshold": 0, 00:27:08.187 
"tls_version": 0, 00:27:08.187 "enable_ktls": false 00:27:08.187 } 00:27:08.187 } 00:27:08.187 ] 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "subsystem": "vmd", 00:27:08.187 "config": [] 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "subsystem": "accel", 00:27:08.187 "config": [ 00:27:08.187 { 00:27:08.187 "method": "accel_set_options", 00:27:08.187 "params": { 00:27:08.187 "small_cache_size": 128, 00:27:08.187 "large_cache_size": 16, 00:27:08.187 "task_count": 2048, 00:27:08.187 "sequence_count": 2048, 00:27:08.187 "buf_count": 2048 00:27:08.187 } 00:27:08.187 } 00:27:08.187 ] 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "subsystem": "bdev", 00:27:08.187 "config": [ 00:27:08.187 { 00:27:08.187 "method": "bdev_set_options", 00:27:08.187 "params": { 00:27:08.187 "bdev_io_pool_size": 65535, 00:27:08.187 "bdev_io_cache_size": 256, 00:27:08.187 "bdev_auto_examine": true, 00:27:08.187 "iobuf_small_cache_size": 128, 00:27:08.187 "iobuf_large_cache_size": 16 00:27:08.187 } 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "method": "bdev_raid_set_options", 00:27:08.187 "params": { 00:27:08.187 "process_window_size_kb": 1024 00:27:08.187 } 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "method": "bdev_iscsi_set_options", 00:27:08.187 "params": { 00:27:08.187 "timeout_sec": 30 00:27:08.187 } 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "method": "bdev_nvme_set_options", 00:27:08.187 "params": { 00:27:08.187 "action_on_timeout": "none", 00:27:08.187 "timeout_us": 0, 00:27:08.187 "timeout_admin_us": 0, 00:27:08.187 "keep_alive_timeout_ms": 10000, 00:27:08.187 "arbitration_burst": 0, 00:27:08.187 "low_priority_weight": 0, 00:27:08.187 "medium_priority_weight": 0, 00:27:08.187 "high_priority_weight": 0, 00:27:08.187 "nvme_adminq_poll_period_us": 10000, 00:27:08.187 "nvme_ioq_poll_period_us": 0, 00:27:08.187 "io_queue_requests": 512, 00:27:08.187 "delay_cmd_submit": true, 00:27:08.187 "transport_retry_count": 4, 00:27:08.187 "bdev_retry_count": 3, 00:27:08.187 "transport_ack_timeout": 0, 00:27:08.187 "ctrlr_loss_timeout_sec": 0, 00:27:08.187 "reconnect_delay_sec": 0, 00:27:08.187 "fast_io_fail_timeout_sec": 0, 00:27:08.187 "disable_auto_failback": false, 00:27:08.187 "generate_uuids": false, 00:27:08.187 "transport_tos": 0, 00:27:08.187 "nvme_error_stat": false, 00:27:08.187 "rdma_srq_size": 0, 00:27:08.187 "io_path_stat": false, 00:27:08.187 "allow_accel_sequence": false, 00:27:08.187 "rdma_max_cq_size": 0, 00:27:08.187 "rdma_cm_event_timeout_ms": 0, 00:27:08.187 "dhchap_digests": [ 00:27:08.187 "sha256", 00:27:08.187 "sha384", 00:27:08.187 "sha512" 00:27:08.187 ], 00:27:08.187 "dhchap_dhgroups": [ 00:27:08.187 "null", 00:27:08.187 "ffdhe2048", 00:27:08.187 "ffdhe3072", 00:27:08.187 "ffdhe4096", 00:27:08.187 "ffdhe6144", 00:27:08.187 "ffdhe8192" 00:27:08.187 ] 00:27:08.187 } 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "method": "bdev_nvme_attach_controller", 00:27:08.187 "params": { 00:27:08.187 "name": "nvme0", 00:27:08.187 "trtype": "TCP", 00:27:08.187 "adrfam": "IPv4", 00:27:08.187 "traddr": "127.0.0.1", 00:27:08.187 "trsvcid": "4420", 00:27:08.187 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:08.187 "prchk_reftag": false, 00:27:08.187 "prchk_guard": false, 00:27:08.187 "ctrlr_loss_timeout_sec": 0, 00:27:08.187 "reconnect_delay_sec": 0, 00:27:08.187 "fast_io_fail_timeout_sec": 0, 00:27:08.187 "psk": "key0", 00:27:08.187 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:08.187 "hdgst": false, 00:27:08.187 "ddgst": false 00:27:08.187 } 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "method": "bdev_nvme_set_hotplug", 
00:27:08.187 "params": { 00:27:08.187 "period_us": 100000, 00:27:08.187 "enable": false 00:27:08.187 } 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "method": "bdev_wait_for_examine" 00:27:08.187 } 00:27:08.187 ] 00:27:08.187 }, 00:27:08.187 { 00:27:08.187 "subsystem": "nbd", 00:27:08.187 "config": [] 00:27:08.187 } 00:27:08.187 ] 00:27:08.188 }' 00:27:08.188 01:01:00 -- keyring/file.sh@114 -- # killprocess 1856646 00:27:08.188 01:01:00 -- common/autotest_common.sh@936 -- # '[' -z 1856646 ']' 00:27:08.188 01:01:00 -- common/autotest_common.sh@940 -- # kill -0 1856646 00:27:08.188 01:01:00 -- common/autotest_common.sh@941 -- # uname 00:27:08.188 01:01:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:08.188 01:01:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1856646 00:27:08.188 01:01:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:08.188 01:01:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:08.188 01:01:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1856646' 00:27:08.188 killing process with pid 1856646 00:27:08.188 01:01:00 -- common/autotest_common.sh@955 -- # kill 1856646 00:27:08.188 Received shutdown signal, test time was about 1.000000 seconds 00:27:08.188 00:27:08.188 Latency(us) 00:27:08.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.188 =================================================================================================================== 00:27:08.188 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:08.188 01:01:00 -- common/autotest_common.sh@960 -- # wait 1856646 00:27:08.446 01:01:01 -- keyring/file.sh@117 -- # bperfpid=1858256 00:27:08.446 01:01:01 -- keyring/file.sh@119 -- # waitforlisten 1858256 /var/tmp/bperf.sock 00:27:08.446 01:01:01 -- common/autotest_common.sh@817 -- # '[' -z 1858256 ']' 00:27:08.446 01:01:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:08.446 01:01:01 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:08.446 01:01:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:08.446 01:01:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:08.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:08.446 01:01:01 -- keyring/file.sh@115 -- # echo '{ 00:27:08.446 "subsystems": [ 00:27:08.446 { 00:27:08.446 "subsystem": "keyring", 00:27:08.446 "config": [ 00:27:08.446 { 00:27:08.446 "method": "keyring_file_add_key", 00:27:08.446 "params": { 00:27:08.446 "name": "key0", 00:27:08.446 "path": "/tmp/tmp.XlwoBs3h3g" 00:27:08.446 } 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "method": "keyring_file_add_key", 00:27:08.446 "params": { 00:27:08.446 "name": "key1", 00:27:08.446 "path": "/tmp/tmp.5Jb9lwBKiG" 00:27:08.446 } 00:27:08.446 } 00:27:08.446 ] 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "subsystem": "iobuf", 00:27:08.446 "config": [ 00:27:08.446 { 00:27:08.446 "method": "iobuf_set_options", 00:27:08.446 "params": { 00:27:08.446 "small_pool_count": 8192, 00:27:08.446 "large_pool_count": 1024, 00:27:08.446 "small_bufsize": 8192, 00:27:08.446 "large_bufsize": 135168 00:27:08.446 } 00:27:08.446 } 00:27:08.446 ] 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "subsystem": "sock", 00:27:08.446 "config": [ 00:27:08.446 { 00:27:08.446 "method": "sock_impl_set_options", 00:27:08.446 "params": { 00:27:08.446 "impl_name": "posix", 00:27:08.446 "recv_buf_size": 2097152, 00:27:08.446 "send_buf_size": 2097152, 00:27:08.446 "enable_recv_pipe": true, 00:27:08.446 "enable_quickack": false, 00:27:08.446 "enable_placement_id": 0, 00:27:08.446 "enable_zerocopy_send_server": true, 00:27:08.446 "enable_zerocopy_send_client": false, 00:27:08.446 "zerocopy_threshold": 0, 00:27:08.446 "tls_version": 0, 00:27:08.446 "enable_ktls": false 00:27:08.446 } 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "method": "sock_impl_set_options", 00:27:08.446 "params": { 00:27:08.446 "impl_name": "ssl", 00:27:08.446 "recv_buf_size": 4096, 00:27:08.446 "send_buf_size": 4096, 00:27:08.446 "enable_recv_pipe": true, 00:27:08.446 "enable_quickack": false, 00:27:08.446 "enable_placement_id": 0, 00:27:08.446 "enable_zerocopy_send_server": true, 00:27:08.446 "enable_zerocopy_send_client": false, 00:27:08.446 "zerocopy_threshold": 0, 00:27:08.446 "tls_version": 0, 00:27:08.446 "enable_ktls": false 00:27:08.446 } 00:27:08.446 } 00:27:08.446 ] 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "subsystem": "vmd", 00:27:08.446 "config": [] 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "subsystem": "accel", 00:27:08.446 "config": [ 00:27:08.446 { 00:27:08.446 "method": "accel_set_options", 00:27:08.446 "params": { 00:27:08.446 "small_cache_size": 128, 00:27:08.446 "large_cache_size": 16, 00:27:08.446 "task_count": 2048, 00:27:08.446 "sequence_count": 2048, 00:27:08.446 "buf_count": 2048 00:27:08.446 } 00:27:08.446 } 00:27:08.446 ] 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "subsystem": "bdev", 00:27:08.446 "config": [ 00:27:08.446 { 00:27:08.446 "method": "bdev_set_options", 00:27:08.446 "params": { 00:27:08.446 "bdev_io_pool_size": 65535, 00:27:08.446 "bdev_io_cache_size": 256, 00:27:08.446 "bdev_auto_examine": true, 00:27:08.446 "iobuf_small_cache_size": 128, 00:27:08.446 "iobuf_large_cache_size": 16 00:27:08.446 } 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "method": "bdev_raid_set_options", 00:27:08.446 "params": { 00:27:08.446 "process_window_size_kb": 1024 00:27:08.446 } 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "method": "bdev_iscsi_set_options", 00:27:08.446 "params": { 00:27:08.446 "timeout_sec": 30 00:27:08.446 } 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "method": "bdev_nvme_set_options", 00:27:08.446 "params": { 00:27:08.446 "action_on_timeout": "none", 00:27:08.446 "timeout_us": 0, 00:27:08.446 "timeout_admin_us": 0, 00:27:08.446 
"keep_alive_timeout_ms": 10000, 00:27:08.446 "arbitration_burst": 0, 00:27:08.446 "low_priority_weight": 0, 00:27:08.446 "medium_priority_weight": 0, 00:27:08.446 "high_priority_weight": 0, 00:27:08.446 "nvme_adminq_poll_period_us": 10000, 00:27:08.446 "nvme_ioq_poll_period_us": 0, 00:27:08.446 "io_queue_requests": 512, 00:27:08.446 "delay_cmd_submit": true, 00:27:08.446 "transport_retry_count": 4, 00:27:08.446 "bdev_retry_count": 3, 00:27:08.446 "transport_ack_timeout": 0, 00:27:08.446 "ctrlr_loss_timeout_sec": 0, 00:27:08.446 "reconnect_delay_sec": 0, 00:27:08.446 "fast_io_fail_timeout_sec": 0, 00:27:08.446 "disable_auto_failback": false, 00:27:08.446 "generate_uuids": false, 00:27:08.446 "transport_tos": 0, 00:27:08.446 "nvme_error_stat": false, 00:27:08.446 "rdma_srq_size": 0, 00:27:08.446 "io_path_stat": false, 00:27:08.446 "allow_accel_sequence": false, 00:27:08.446 "rdma_max_cq_size": 0, 00:27:08.446 "rdma_cm_event_timeout_ms": 0, 00:27:08.446 "dhchap_digests": [ 00:27:08.446 "sha256", 00:27:08.446 "sha384", 00:27:08.446 "sha512" 00:27:08.446 ], 00:27:08.446 "dhchap_dhgroups": [ 00:27:08.446 "null", 00:27:08.446 "ffdhe2048", 00:27:08.446 "ffdhe3072", 00:27:08.446 "ffdhe4096", 00:27:08.446 "ffdhe6144", 00:27:08.446 "ffdhe8192" 00:27:08.446 ] 00:27:08.446 } 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "method": "bdev_nvme_attach_controller", 00:27:08.446 "params": { 00:27:08.446 "name": "nvme0", 00:27:08.446 "trtype": "TCP", 00:27:08.446 "adrfam": "IPv4", 00:27:08.446 "traddr": "127.0.0.1", 00:27:08.446 "trsvcid": "4420", 00:27:08.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:08.446 "prchk_reftag": false, 00:27:08.446 "prchk_guard": false, 00:27:08.446 "ctrlr_loss_timeout_sec": 0, 00:27:08.446 "reconnect_delay_sec": 0, 00:27:08.446 "fast_io_fail_timeout_sec": 0, 00:27:08.446 "psk": "key0", 00:27:08.446 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:08.446 "hdgst": false, 00:27:08.446 "ddgst": false 00:27:08.446 } 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "method": "bdev_nvme_set_hotplug", 00:27:08.446 "params": { 00:27:08.446 "period_us": 100000, 00:27:08.446 "enable": false 00:27:08.446 } 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "method": "bdev_wait_for_examine" 00:27:08.446 } 00:27:08.446 ] 00:27:08.446 }, 00:27:08.446 { 00:27:08.446 "subsystem": "nbd", 00:27:08.446 "config": [] 00:27:08.446 } 00:27:08.446 ] 00:27:08.446 }' 00:27:08.446 01:01:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:08.446 01:01:01 -- common/autotest_common.sh@10 -- # set +x 00:27:08.446 [2024-04-27 01:01:01.067349] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:27:08.446 [2024-04-27 01:01:01.067396] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1858256 ] 00:27:08.446 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.446 [2024-04-27 01:01:01.121277] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.704 [2024-04-27 01:01:01.200575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.704 [2024-04-27 01:01:01.351088] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:09.267 01:01:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:09.267 01:01:01 -- common/autotest_common.sh@850 -- # return 0 00:27:09.267 01:01:01 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:09.267 01:01:01 -- keyring/file.sh@120 -- # jq length 00:27:09.267 01:01:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:09.524 01:01:02 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:09.524 01:01:02 -- keyring/file.sh@121 -- # get_refcnt key0 00:27:09.524 01:01:02 -- keyring/common.sh@12 -- # get_key key0 00:27:09.524 01:01:02 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:09.524 01:01:02 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:09.524 01:01:02 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:09.524 01:01:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:09.781 01:01:02 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:09.781 01:01:02 -- keyring/file.sh@122 -- # get_refcnt key1 00:27:09.781 01:01:02 -- keyring/common.sh@12 -- # get_key key1 00:27:09.781 01:01:02 -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:09.781 01:01:02 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:09.781 01:01:02 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:09.781 01:01:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:09.781 01:01:02 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:09.781 01:01:02 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:09.781 01:01:02 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:09.781 01:01:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:10.039 01:01:02 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:10.039 01:01:02 -- keyring/file.sh@1 -- # cleanup 00:27:10.039 01:01:02 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.XlwoBs3h3g /tmp/tmp.5Jb9lwBKiG 00:27:10.039 01:01:02 -- keyring/file.sh@20 -- # killprocess 1858256 00:27:10.039 01:01:02 -- common/autotest_common.sh@936 -- # '[' -z 1858256 ']' 00:27:10.039 01:01:02 -- common/autotest_common.sh@940 -- # kill -0 1858256 00:27:10.039 01:01:02 -- common/autotest_common.sh@941 -- # uname 00:27:10.039 01:01:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:10.039 01:01:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1858256 00:27:10.039 01:01:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:10.039 01:01:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:10.039 01:01:02 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 1858256' 00:27:10.039 killing process with pid 1858256 00:27:10.039 01:01:02 -- common/autotest_common.sh@955 -- # kill 1858256 00:27:10.039 Received shutdown signal, test time was about 1.000000 seconds 00:27:10.039 00:27:10.039 Latency(us) 00:27:10.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.039 =================================================================================================================== 00:27:10.039 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:10.039 01:01:02 -- common/autotest_common.sh@960 -- # wait 1858256 00:27:10.298 01:01:02 -- keyring/file.sh@21 -- # killprocess 1856609 00:27:10.298 01:01:02 -- common/autotest_common.sh@936 -- # '[' -z 1856609 ']' 00:27:10.298 01:01:02 -- common/autotest_common.sh@940 -- # kill -0 1856609 00:27:10.298 01:01:02 -- common/autotest_common.sh@941 -- # uname 00:27:10.298 01:01:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:10.298 01:01:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1856609 00:27:10.298 01:01:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:10.298 01:01:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:10.298 01:01:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1856609' 00:27:10.298 killing process with pid 1856609 00:27:10.298 01:01:02 -- common/autotest_common.sh@955 -- # kill 1856609 00:27:10.298 [2024-04-27 01:01:02.875716] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:10.298 01:01:02 -- common/autotest_common.sh@960 -- # wait 1856609 00:27:10.556 00:27:10.556 real 0m12.054s 00:27:10.556 user 0m28.118s 00:27:10.556 sys 0m2.702s 00:27:10.556 01:01:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:10.556 01:01:03 -- common/autotest_common.sh@10 -- # set +x 00:27:10.556 ************************************ 00:27:10.556 END TEST keyring_file 00:27:10.556 ************************************ 00:27:10.812 01:01:03 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:27:10.812 01:01:03 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:27:10.812 01:01:03 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:27:10.812 01:01:03 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:27:10.812 01:01:03 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:27:10.812 01:01:03 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:27:10.812 01:01:03 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:27:10.812 01:01:03 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:27:10.812 01:01:03 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:27:10.812 01:01:03 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:27:10.812 01:01:03 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:10.812 01:01:03 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:27:10.812 01:01:03 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:10.812 01:01:03 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:10.812 01:01:03 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:27:10.812 01:01:03 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:27:10.812 01:01:03 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:27:10.812 01:01:03 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:27:10.812 01:01:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:10.812 01:01:03 -- common/autotest_common.sh@10 -- # set +x 00:27:10.812 01:01:03 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:27:10.812 01:01:03 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:27:10.812 01:01:03 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:27:10.812 01:01:03 -- common/autotest_common.sh@10 -- # set +x 00:27:14.995 INFO: APP EXITING 00:27:14.995 INFO: killing all VMs 00:27:14.995 INFO: killing vhost app 00:27:14.995 INFO: EXIT DONE 00:27:17.532 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:27:17.532 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:27:17.532 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:27:17.532 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:27:17.532 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:27:17.532 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:27:17.532 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:27:17.532 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:27:17.532 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:27:17.532 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:27:17.532 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:27:17.532 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:27:17.532 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:27:17.532 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:27:17.790 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:27:17.790 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:27:17.790 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:27:20.321 Cleaning 00:27:20.321 Removing: /var/run/dpdk/spdk0/config 00:27:20.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:20.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:20.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:20.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:20.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:20.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:20.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:20.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:20.594 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:20.594 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:20.594 Removing: /var/run/dpdk/spdk1/config 00:27:20.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:20.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:20.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:20.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:20.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:20.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:20.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:20.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:20.594 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:20.594 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:20.594 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:20.594 Removing: /var/run/dpdk/spdk2/config 00:27:20.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:20.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:20.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:20.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:20.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:20.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 
00:27:20.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:20.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:20.594 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:20.594 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:20.594 Removing: /var/run/dpdk/spdk3/config 00:27:20.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:20.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:20.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:20.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:20.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:20.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:20.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:20.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:20.594 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:20.594 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:20.594 Removing: /var/run/dpdk/spdk4/config 00:27:20.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:20.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:20.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:20.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:20.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:20.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:20.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:20.594 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:20.594 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:20.594 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:20.594 Removing: /dev/shm/bdev_svc_trace.1 00:27:20.594 Removing: /dev/shm/nvmf_trace.0 00:27:20.594 Removing: /dev/shm/spdk_tgt_trace.pid1504786 00:27:20.594 Removing: /var/run/dpdk/spdk0 00:27:20.594 Removing: /var/run/dpdk/spdk1 00:27:20.594 Removing: /var/run/dpdk/spdk2 00:27:20.874 Removing: /var/run/dpdk/spdk3 00:27:20.874 Removing: /var/run/dpdk/spdk4 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1502368 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1503471 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1504786 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1505471 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1506423 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1506661 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1507653 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1507877 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1508230 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1509734 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1511018 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1511314 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1511607 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1512125 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1512441 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1512698 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1512955 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1513244 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1514225 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1517644 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1518020 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1518343 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1518521 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1519017 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1519105 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1519535 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1519755 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1520033 00:27:20.874 Removing: 
/var/run/dpdk/spdk_pid1520260 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1520505 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1520545 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1521108 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1521362 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1521664 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1521947 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1522198 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1522290 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1522548 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1522872 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1523205 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1523532 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1523790 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1524048 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1524301 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1524566 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1524820 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1525080 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1525347 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1525666 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1526006 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1526322 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1526577 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1526837 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1527093 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1527360 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1527615 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1527876 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1528166 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1528490 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1532215 00:27:20.874 Removing: /var/run/dpdk/spdk_pid1577054 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1581389 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1590283 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1595685 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1599921 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1600612 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1612143 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1612148 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1613189 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1614492 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1615391 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1615876 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1615887 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1616115 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1616343 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1616346 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1617261 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1618071 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1618877 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1619561 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1619563 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1619802 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1621046 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1622259 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1630596 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1630846 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1635121 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1641001 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1643602 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1654018 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1663443 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1665207 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1666187 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1682706 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1686597 00:27:21.201 Removing: 
/var/run/dpdk/spdk_pid1690915 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1692692 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1694531 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1694779 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1695016 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1695033 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1695753 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1697556 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1698577 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1699080 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1701341 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1702417 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1703147 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1707192 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1716917 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1720951 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1726959 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1728482 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1730003 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1734349 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1738392 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1745977 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1745979 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1750470 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1750699 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1750933 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1751454 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1751493 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1756161 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1756736 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1761076 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1763838 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1769399 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1774567 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1781674 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1781720 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1799603 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1800295 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1800992 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1801698 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1802986 00:27:21.201 Removing: /var/run/dpdk/spdk_pid1803670 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1804369 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1805066 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1809315 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1809555 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1815619 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1815878 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1818125 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1825875 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1825881 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1831006 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1833012 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1834941 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1836136 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1838106 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1839192 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1848446 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1848980 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1849578 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1851850 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1852318 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1852783 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1856609 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1856646 00:27:21.477 Removing: /var/run/dpdk/spdk_pid1858256 00:27:21.477 Clean 00:27:21.477 01:01:14 -- common/autotest_common.sh@1437 -- # 
return 0 00:27:21.477 01:01:14 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:27:21.477 01:01:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:21.477 01:01:14 -- common/autotest_common.sh@10 -- # set +x 00:27:21.735 01:01:14 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:27:21.735 01:01:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:21.735 01:01:14 -- common/autotest_common.sh@10 -- # set +x 00:27:21.736 01:01:14 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:21.736 01:01:14 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:21.736 01:01:14 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:21.736 01:01:14 -- spdk/autotest.sh@389 -- # hash lcov 00:27:21.736 01:01:14 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:21.736 01:01:14 -- spdk/autotest.sh@391 -- # hostname 00:27:21.736 01:01:14 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:21.736 geninfo: WARNING: invalid characters removed from testname! 00:27:43.677 01:01:33 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:43.677 01:01:36 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:45.582 01:01:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:47.488 01:01:39 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:48.880 01:01:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:50.788 01:01:43 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:52.698 01:01:44 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:52.698 01:01:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:52.698 01:01:45 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:52.699 01:01:45 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:52.699 01:01:45 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:52.699 01:01:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.699 01:01:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.699 01:01:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.699 01:01:45 -- paths/export.sh@5 -- $ export PATH 00:27:52.699 01:01:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:52.699 01:01:45 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:27:52.699 01:01:45 -- common/autobuild_common.sh@435 -- $ date +%s 00:27:52.699 01:01:45 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714172505.XXXXXX 00:27:52.699 01:01:45 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714172505.a40QQQ 00:27:52.699 01:01:45 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:27:52.699 01:01:45 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:27:52.699 01:01:45 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:27:52.699 01:01:45 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:27:52.699 01:01:45 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:27:52.699 01:01:45 -- common/autobuild_common.sh@451 -- $ get_config_params 00:27:52.699 01:01:45 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:27:52.699 01:01:45 -- common/autotest_common.sh@10 -- $ set +x 00:27:52.699 01:01:45 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:27:52.699 01:01:45 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:27:52.699 01:01:45 -- pm/common@17 -- $ local monitor 00:27:52.699 01:01:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:52.699 01:01:45 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1867793 00:27:52.699 01:01:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:52.699 01:01:45 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1867795 00:27:52.699 01:01:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:52.699 01:01:45 -- pm/common@21 -- $ date +%s 00:27:52.699 01:01:45 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1867797 00:27:52.699 01:01:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:52.699 01:01:45 -- pm/common@21 -- $ date +%s 00:27:52.699 01:01:45 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=1867800 00:27:52.699 01:01:45 -- pm/common@26 -- $ sleep 1 00:27:52.699 01:01:45 -- pm/common@21 -- $ date +%s 00:27:52.699 01:01:45 -- pm/common@21 -- $ date +%s 00:27:52.699 01:01:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714172505 00:27:52.699 01:01:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714172505 00:27:52.699 01:01:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714172505 00:27:52.699 01:01:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714172505 00:27:52.699 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714172505_collect-cpu-temp.pm.log 00:27:52.699 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714172505_collect-cpu-load.pm.log 00:27:52.699 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714172505_collect-vmstat.pm.log 00:27:52.699 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714172505_collect-bmc-pm.bmc.pm.log 00:27:53.639 
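Note: the lcov steps earlier in this log reduce to capturing a per-run .info file, merging it with the baseline, and pruning paths that should not count against SPDK coverage. Condensed sketch with the same lcov invocations as in the log (the genhtml --rc options and long output paths are trimmed for readability):
  RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  lcov $RC --no-external -q -c -d spdk -t spdk-wfp-08 -o cov_test.info              # capture this run
  lcov $RC --no-external -q -a cov_base.info -a cov_test.info -o cov_total.info     # merge with baseline
  lcov $RC --no-external -q -r cov_total.info '*/dpdk/*' -o cov_total.info          # drop bundled DPDK
  lcov $RC --no-external -q -r cov_total.info '/usr/*' -o cov_total.info            # drop system code
  lcov $RC --no-external -q -r cov_total.info '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o cov_total.info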
01:01:46 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:27:53.639 01:01:46 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:27:53.639 01:01:46 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:53.639 01:01:46 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:53.639 01:01:46 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:53.639 01:01:46 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:53.639 01:01:46 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:53.639 01:01:46 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:53.639 01:01:46 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:53.639 01:01:46 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:53.639 01:01:46 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:27:53.639 01:01:46 -- pm/common@30 -- $ signal_monitor_resources TERM 00:27:53.639 01:01:46 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:27:53.639 01:01:46 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:53.639 01:01:46 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:27:53.639 01:01:46 -- pm/common@45 -- $ pid=1867808 00:27:53.639 01:01:46 -- pm/common@52 -- $ sudo kill -TERM 1867808 00:27:53.639 01:01:46 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:53.639 01:01:46 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:27:53.639 01:01:46 -- pm/common@45 -- $ pid=1867811 00:27:53.639 01:01:46 -- pm/common@52 -- $ sudo kill -TERM 1867811 00:27:53.639 01:01:46 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:53.639 01:01:46 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:27:53.639 01:01:46 -- pm/common@45 -- $ pid=1867809 00:27:53.639 01:01:46 -- pm/common@52 -- $ sudo kill -TERM 1867809 00:27:53.639 01:01:46 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:53.639 01:01:46 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:27:53.639 01:01:46 -- pm/common@45 -- $ pid=1867810 00:27:53.639 01:01:46 -- pm/common@52 -- $ sudo kill -TERM 1867810 00:27:53.639 + [[ -n 1399536 ]] 00:27:53.639 + sudo kill 1399536 00:27:53.650 [Pipeline] } 00:27:53.669 [Pipeline] // stage 00:27:53.674 [Pipeline] } 00:27:53.691 [Pipeline] // timeout 00:27:53.695 [Pipeline] } 00:27:53.710 [Pipeline] // catchError 00:27:53.716 [Pipeline] } 00:27:53.733 [Pipeline] // wrap 00:27:53.739 [Pipeline] } 00:27:53.756 [Pipeline] // catchError 00:27:53.764 [Pipeline] stage 00:27:53.765 [Pipeline] { (Epilogue) 00:27:53.779 [Pipeline] catchError 00:27:53.781 [Pipeline] { 00:27:53.798 [Pipeline] echo 00:27:53.800 Cleanup processes 00:27:53.806 [Pipeline] sh 00:27:54.091 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:54.091 1867923 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:27:54.091 1868213 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:54.106 [Pipeline] sh 00:27:54.390 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:27:54.390 ++ grep -v 'sudo pgrep' 00:27:54.390 ++ awk '{print $1}' 00:27:54.390 + sudo kill -9 1867923 00:27:54.402 [Pipeline] sh 00:27:54.685 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:02.828 [Pipeline] sh 00:28:03.112 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:03.113 Artifacts sizes are good 00:28:03.128 [Pipeline] archiveArtifacts 00:28:03.136 Archiving artifacts 00:28:03.302 [Pipeline] sh 00:28:03.642 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:03.658 [Pipeline] cleanWs 00:28:03.668 [WS-CLEANUP] Deleting project workspace... 00:28:03.668 [WS-CLEANUP] Deferred wipeout is used... 00:28:03.675 [WS-CLEANUP] done 00:28:03.677 [Pipeline] } 00:28:03.704 [Pipeline] // catchError 00:28:03.719 [Pipeline] sh 00:28:03.999 + logger -p user.info -t JENKINS-CI 00:28:04.007 [Pipeline] } 00:28:04.023 [Pipeline] // stage 00:28:04.029 [Pipeline] } 00:28:04.046 [Pipeline] // node 00:28:04.052 [Pipeline] End of Pipeline 00:28:04.090 Finished: SUCCESS